Has anyone been able to get the seamless demo to work on their Mac M1?
I've been trying to get my translator model to use MPS, but the seamless interface doesn't appear to support it. Any suggestions would be much appreciated!
I've set up my translator for MPS and tried both float32 and float16.

I get this error when using float16:

```
loc("varianceEps"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/0032d1ee-80fd-11ee-8227-6aecfccc70fe/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":233:0)): error: input types 'tensor<1x378x1xf16>' and 'tensor<1xf32>' are not broadcast compatible
LLVM ERROR: Failed to infer result type(s).
[1] 94184 abort "/Volumes/Trebleet/Python Projects/omniglot/.venv/bin/python"
```
I get this error when using float32:

```
/Volumes/Trebleet/Python Projects/omniglot/.venv/lib/python3.10/site-packages/seamless_communication/models/unity/nar_decoder_frontend.py:231: UserWarning: MPS: no support for int64 reduction ops, casting it to int32 (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/ReduceOps.mm:144.)
  max_len = int(char_lens.sum(1).max().item())
/Volumes/Trebleet/Python Projects/omniglot/.venv/lib/python3.10/site-packages/seamless_communication/models/unity/nar_decoder_frontend.py:231: UserWarning: MPS: no support for int64 min/max ops, casting it to int32 (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/ReduceOps.mm:1271.)
  max_len = int(char_lens.sum(1).max().item())
/Volumes/Trebleet/Python Projects/omniglot/.venv/lib/python3.10/site-packages/seamless_communication/models/unity/length_regulator.py:35: UserWarning: MPS: no support for int64 repeats mask, casting it to int32 (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/Repeat.mm:236.)
  upsampled_seqs[b, : upsampled_seq_lens[b]] = seqs[b].repeat_interleave(
Traceback (most recent call last):
  File "/Volumes/Trebleet/Python Projects/omniglot/demo.py", line 89, in <module>
    s2st_inference()
  File "/Volumes/Trebleet/Python Projects/omniglot/demo.py", line 43, in s2st_inference
    text_output, speech_output = translator.predict(
  File "/Volumes/Trebleet/Python Projects/omniglot/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Volumes/Trebleet/Python Projects/omniglot/.venv/lib/python3.10/site-packages/seamless_communication/inference/translator.py", line 407, in predict
    translated_audio_wav = self.vocoder(
  File "/Volumes/Trebleet/Python Projects/omniglot/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Volumes/Trebleet/Python Projects/omniglot/.venv/lib/python3.10/site-packages/seamless_communication/models/vocoder/vocoder.py", line 49, in forward
    return self.code_generator(x, dur_prediction)  # type: ignore[no-any-return]
  File "/Volumes/Trebleet/Python Projects/omniglot/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Volumes/Trebleet/Python Projects/omniglot/.venv/lib/python3.10/site-packages/seamless_communication/models/vocoder/codehifigan.py", line 101, in forward
    return super().forward(x)
  File "/Volumes/Trebleet/Python Projects/omniglot/.venv/lib/python3.10/site-packages/seamless_communication/models/vocoder/hifigan.py", line 181, in forward
    x = self.conv_pre(x)
  File "/Volumes/Trebleet/Python Projects/omniglot/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    result = hook(self, args)
  File "/Volumes/Trebleet/Python Projects/omniglot/.venv/lib/python3.10/site-packages/torch/nn/utils/weight_norm.py", line 65, in __call__
    setattr(module, self.name, self.compute_weight(module))
  File "/Volumes/Trebleet/Python Projects/omniglot/.venv/lib/python3.10/site-packages/torch/nn/utils/weight_norm.py", line 25, in compute_weight
    return _weight_norm(v, g, self.dim)
NotImplementedError: The operator 'aten::_weight_norm_interface' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
```
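As a partial workaround for the float32 failure, the error message itself suggests enabling the CPU fallback for MPS ops that aren't implemented yet (such as `aten::_weight_norm_interface` in the vocoder). The flag is commonly recommended to be set before the first `import torch` anywhere in the process; a minimal sketch (untested against seamless, and the affected ops will run on CPU, so it will be slower):

```python
import os

# Enable CPU fallback for operators the MPS backend doesn't implement.
# Set this before importing torch so the flag is in place when the
# backend is initialized.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

# Only import torch / seamless_communication after the flag is set, e.g.:
# import torch
# from seamless_communication.inference import Translator
```

Equivalently, the flag can be set from the shell when launching the script: `PYTORCH_ENABLE_MPS_FALLBACK=1 python demo.py`. Note this only addresses the missing-op error; the float16 broadcast incompatibility (`f16` vs `f32` tensors) looks like a separate dtype-mixing issue inside the model.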