BERT pretraining was launched with the following command:
python run_pretrain.py --device_target Ascend --amp True --jit True --lr 2e-5 --warmup_steps 10000 --train_batch_size 256 --epochs 15 --save_steps 10000 --do_load_ckpt False --config config/bert_config_small.json
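For reference, a minimal sketch of how flags like these are presumably parsed; the str2bool helper is an assumption, not taken from run_pretrain.py. It matters here because argparse's type=bool treats the string "False" as truthy, so flags passed as True/False need an explicit converter:

```python
# Hypothetical argparse setup mirroring the command above (not the actual
# run_pretrain.py code). Note: type=bool would make --do_load_ckpt False
# evaluate as True, since bool("False") is True, hence the converter below.
import argparse

def str2bool(value: str) -> bool:
    # Map common textual booleans to real bools.
    return value.lower() in ("true", "1", "yes")

parser = argparse.ArgumentParser()
parser.add_argument("--device_target", default="Ascend", choices=["Ascend", "GPU", "CPU"])
parser.add_argument("--amp", type=str2bool, default=False)    # mixed precision
parser.add_argument("--jit", type=str2bool, default=True)     # graph compilation
parser.add_argument("--lr", type=float, default=2e-5)
parser.add_argument("--warmup_steps", type=int, default=10000)
parser.add_argument("--train_batch_size", type=int, default=256)
parser.add_argument("--epochs", type=int, default=15)
parser.add_argument("--save_steps", type=int, default=10000)
parser.add_argument("--do_load_ckpt", type=str2bool, default=False)
parser.add_argument("--config", default="config/bert_config_small.json")
args = parser.parse_args()
```

It fails with the traceback below: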
Traceback (most recent call last):
File "run_pretrain.py", line 257, in
train(model, optimizer, loss_scaler, grad_reducer, train_dataset, args.train_batch_size, jit=args.jit)
File "run_pretrain.py", line 83, in train
next_sentence_label, segment_ids)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/common/api.py", line 559, in staging_specialize
out = _MindsporeFunctionExecutor(func, hash_obj, input_signature, process_obj, jit_config)(*args)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/common/api.py", line 98, in wrapper
results = fn(*arg, **kwargs)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/common/api.py", line 360, in call
phase = self.compile(args_list, self.fn.name)
File "/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/mindspore/common/api.py", line 323, in compile
is_compile = self._graph_executor.compile(self.fn, compile_args, phase, True)
RuntimeError: Preprocess failed before run graph 0, error msg: Distribute Task Failed, error msg: davinci_model : load task fail, return ret: 1343225860
mindspore/ccsrc/plugin/device/ascend/hal/hardware/ascend_kernel_executor.cc:214 PreprocessBeforeRunGraph
mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_kernel_runtime.cc:543 LoadTask
mindspore/ccsrc/plugin/device/ascend/hal/device/ge_runtime/task/hccl_task.cc:103 Distribute
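The failing frame is hccl_task.cc Distribute, i.e. loading an HCCL (collective communication) task onto the Ascend device failed; together with the grad_reducer argument passed to train(), this suggests the graph contains distributed ops. A common cause is launching such a script without initializing HCCL or without the rank environment (RANK_TABLE_FILE / RANK_ID / DEVICE_ID) set. Below is a minimal, hypothetical sanity check under that assumption; it is not code from this repository:

```python
# Hypothetical HCCL environment check (not from run_pretrain.py).
# "Distribute Task Failed" in hccl_task.cc typically surfaces when a
# collective op is compiled but HCCL was never initialized, or the rank
# table is missing or inconsistent.
import os
from mindspore import context
from mindspore.communication import init, get_rank, get_group_size

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")

# These variables are normally exported by the launch script before training.
for var in ("RANK_TABLE_FILE", "RANK_ID", "DEVICE_ID", "RANK_SIZE"):
    print(f"{var} = {os.getenv(var)}")

# Must run before the first graph containing HCCL ops is compiled;
# it raises immediately if the environment above is not set up.
init("hccl")
print("rank:", get_rank(), "of", get_group_size())
```

If the script is meant to run on a single device, another thing to verify is that no DistributedGradReducer or other collective op is built in the single-card path.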