Replies: 2 comments
-
I'm not an expert in ONNX and I'm facing the same issue with the ONNX documentation.
-
Regarding training build vs. inference build, and regarding large model training vs. on-device training: we do need better documentation to make clear what the ORT offerings are. Let me know if you have any more questions about training.
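In the meantime, a rough sketch of how the prebuilt Python packages map onto those offerings, as I understand it (package names are from PyPI; treat the mapping as an assumption rather than official guidance):
```sh
# Sketch of the ORT Python package landscape (assumed mapping, not official docs):
pip install onnxruntime           # inference-only build (CPU)
pip install onnxruntime-gpu       # inference-only build (CUDA)
pip install onnxruntime-training  # training-enabled build: ORTModule for large
                                  # model training, plus the on-device training API
```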
-
I have now built onnxruntime 1.17.3 for AMD ROCm, using the documentation on how to build for inferencing and training:
https://onnxruntime.ai/docs/build/inferencing.html
https://onnxruntime.ai/docs/build/training.html
The documentation does not, however, explain the main difference between these builds. If I do the training build, does it also include the inferencing features? Basically, does the training build just add "--enable_training" on top of the inferencing build? See the sketch below.
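For reference, this is roughly what I ran (a minimal sketch; paths and extra flags are placeholders, and the assumption that the training build is just the inferencing build plus one flag is exactly what I'm asking about):
```sh
# Inferencing-only build (per docs/build/inferencing.html)
./build.sh --config Release --build_shared_lib --parallel

# Training build (per docs/build/training.html) -- same command plus the
# training flag; whether this is a strict superset is my question
./build.sh --config Release --build_shared_lib --parallel --enable_training
```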
There are also two AMD-specific builds described in
https://onnxruntime.ai/docs/build/eps.html
and it's a little unclear whether I can enable both the "--use_rocm --rocm_home" and "--use_migraphx --migraphx_home" flags in the same build, or whether one flag removes features enabled by the other.