Problem with the Rank #2

Open

BOBaraki opened this issue Jun 13, 2024 · 1 comment

Comments

@BOBaraki
Hello,

I've been trying to train the model on 4 GPUs, but I'm running into problems with the ranks of the GPUs: train.py cannot see the argument. More specifically:

bash train.sh 
/home/gtzelepis/miniconda3/envs/dformer/lib/python3.10/site-packages/torch/distributed/launch.py:181: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See 
https://pytorch.org/docs/stable/distributed.html#launch-utility for 
further instructions

  warnings.warn(
[2024-06-13 17:39:19,500] torch.distributed.run: [WARNING] 
[2024-06-13 17:39:19,500] torch.distributed.run: [WARNING] *****************************************
[2024-06-13 17:39:19,500] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
[2024-06-13 17:39:19,500] torch.distributed.run: [WARNING] *****************************************
[W socket.cpp:436] [c10d] The server socket cannot be initialized on [::]:29575 (errno: 97 - Address family not supported by protocol).
[W socket.cpp:663] [c10d] The client socket cannot be initialized to connect to [::ffff:127.0.0.1]:29575 (errno: 97 - Address family not supported by protocol).
import local models
usage: train.py [-h] [--dataset NAME] [--train-split NAME] [--val-split NAME] [--dataset-download] [--class-map FILENAME] [--model MODEL] [--pretrained] [--initial-checkpoint PATH] [--resume PATH]
                [--no-resume-opt] [--num-classes N] [--gp POOL] [--img-size N] [--in-chans N] [--input-size N N N N N N N N N] [--crop-pct N] [--mean MEAN [MEAN ...]] [--std STD [STD ...]]
                [--interpolation NAME] [-b N] [-vb N] [--channels-last] [--torchscript | --aot-autograd] [--fuser FUSER] [--fast-norm] [--grad-checkpointing] [--opt OPTIMIZER] [--opt-eps EPSILON]
                [--opt-betas BETA [BETA ...]] [--momentum M] [--weight-decay WEIGHT_DECAY] [--clip-grad NORM] [--clip-mode CLIP_MODE] [--layer-decay LAYER_DECAY] [--sched SCHEDULER] [--sched-on-updates]
                [--lr LR] [--lr-base LR] [--lr-base-size DIV] [--lr-base-scale SCALE] [--lr-noise pct, pct [pct, pct ...]] [--lr-noise-pct PERCENT] [--lr-noise-std STDDEV] [--lr-cycle-mul MULT]
                [--lr-cycle-decay MULT] [--lr-cycle-limit N] [--lr-k-decay LR_K_DECAY] [--warmup-lr LR] [--min-lr LR] [--epochs N] [--epoch-repeats N] [--start-epoch N]
                [--decay-milestones MILESTONES [MILESTONES ...]] [--decay-epochs N] [--warmup-epochs N] [--warmup-prefix] [--cooldown-epochs N] [--patience-epochs N] [--decay-rate RATE] [--no-aug]
                [--scale PCT [PCT ...]] [--ratio RATIO [RATIO ...]] [--hflip HFLIP] [--vflip VFLIP] [--color-jitter PCT] [--aa NAME] [--aug-repeats AUG_REPEATS] [--aug-splits AUG_SPLITS] [--jsd-loss]
                [--bce-loss] [--bce-target-thresh BCE_TARGET_THRESH] [--reprob PCT] [--remode REMODE] [--recount RECOUNT] [--resplit] [--mixup MIXUP] [--cutmix CUTMIX]
                [--cutmix-minmax CUTMIX_MINMAX [CUTMIX_MINMAX ...]] [--mixup-prob MIXUP_PROB] [--mixup-switch-prob MIXUP_SWITCH_PROB] [--mixup-mode MIXUP_MODE] [--mixup-off-epoch N] [--smoothing SMOOTHING]
                [--train-interpolation TRAIN_INTERPOLATION] [--drop PCT] [--drop-connect PCT] [--drop-path PCT] [--drop-block PCT] [--bn-momentum BN_MOMENTUM] [--bn-eps BN_EPS] [--sync-bn] [--dist-bn DIST_BN]
                [--split-bn] [--model-ema] [--model-ema-force-cpu] [--model-ema-decay MODEL_EMA_DECAY] [--seed S] [--worker-seeding WORKER_SEEDING] [--log-interval N] [--recovery-interval N]
                [--checkpoint-hist N] [-j N] [--save-images] [--amp] [--amp-dtype AMP_DTYPE] [--amp-impl AMP_IMPL] [--no-ddp-bb] [--pin-mem] [--no-prefetcher] [--output PATH] [--experiment NAME]
                [--eval-metric EVAL_METRIC] [--tta N] [--local_rank LOCAL_RANK] [--use-multi-epochs-loader] [--log-wandb]
                DIR
train.py: error: unrecognized arguments: --local-rank=0
train.py: error: unrecognized arguments: --local-rank=3
train.py: error: unrecognized arguments: --local-rank=2
train.py: error: unrecognized arguments: --local-rank=1
[each of the four workers prints the same "import local models" line and identical usage text; the duplicated copies are omitted]
[2024-06-13 17:39:24,518] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 2) local_rank: 0 (pid: 2465952) of binary: /home/gtzelepis/miniconda3/envs/dformer/bin/python3
Traceback (most recent call last):
  File "/home/gtzelepis/miniconda3/envs/dformer/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/gtzelepis/miniconda3/envs/dformer/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/gtzelepis/miniconda3/envs/dformer/lib/python3.10/site-packages/torch/distributed/launch.py", line 196, in <module>
    main()
  File "/home/gtzelepis/miniconda3/envs/dformer/lib/python3.10/site-packages/torch/distributed/launch.py", line 192, in main
    launch(args)
  File "/home/gtzelepis/miniconda3/envs/dformer/lib/python3.10/site-packages/torch/distributed/launch.py", line 177, in launch
    run(args)
  File "/home/gtzelepis/miniconda3/envs/dformer/lib/python3.10/site-packages/torch/distributed/run.py", line 797, in run
    elastic_launch(
  File "/home/gtzelepis/miniconda3/envs/dformer/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/gtzelepis/miniconda3/envs/dformer/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
train.py FAILED
------------------------------------------------------------
Failures:
[1]:
  time      : 2024-06-13_17:39:24
  host      : pmlabgpus
  rank      : 1 (local_rank: 1)
  exitcode  : 2 (pid: 2465953)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
  time      : 2024-06-13_17:39:24
  host      : pmlabgpus
  rank      : 2 (local_rank: 2)
  exitcode  : 2 (pid: 2465954)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
  time      : 2024-06-13_17:39:24
  host      : pmlabgpus
  rank      : 3 (local_rank: 3)
  exitcode  : 2 (pid: 2465955)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-06-13_17:39:24
  host      : pmlabgpus
  rank      : 0 (local_rank: 0)
  exitcode  : 2 (pid: 2465952)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================

Any help is welcome.

@yinbow
Member

yinbow commented Jun 14, 2024

Thanks for your attention!
We suspect you may be using a newer version of PyTorch (e.g., 2.0).
There are two possible ways to solve this problem:
(1) Install the environment with PyTorch 1.13, following the installation instructions in our README.
(2) Our code is largely based on the timm framework, and the same problem also occurs in timm; you can refer to huggingface/pytorch-image-models#1728 and huggingface/pytorch-image-models#1724.

If neither approach works, feel free to contact us and provide more details so we can help resolve it.
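For reference, the root cause visible in the log is that torch.distributed.launch in PyTorch 2.x passes a hyphenated --local-rank flag to each worker, while train.py's usage message only lists the underscored --local_rank. A minimal sketch of the workaround discussed in the timm issues above, assuming train.py builds its arguments with argparse (the flag name comes from the usage output; everything else here is illustrative):

import argparse
import os

parser = argparse.ArgumentParser()
# Accept both spellings for the same destination: the underscored form
# used by older torch.distributed.launch versions and the hyphenated
# form passed by PyTorch >= 2.0. Fall back to the LOCAL_RANK environment
# variable, which torchrun sets for each worker, as the deprecation
# warning in the log suggests.
parser.add_argument(
    "--local_rank", "--local-rank",
    dest="local_rank", type=int,
    default=int(os.environ.get("LOCAL_RANK", 0)),
    help="Local rank of this process (falls back to $LOCAL_RANK).",
)

args = parser.parse_args()
print(f"running as local rank {args.local_rank}")

Alternatively, replacing python -m torch.distributed.launch with torchrun in train.sh sidesteps the flag entirely, since torchrun exports LOCAL_RANK into each worker's environment by default.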
