Windows Camera post process (DMFT) with DirectML (TensorFlow) #385
Comments
From the DirectML samples, it looks like each operator has to be implemented in C++. Is there anything similar to Intel OpenVINO, where you can convert the TensorFlow model into an IR through the Model Optimizer and let the C++ program load the IR?
If you already have a trained Python TensorFlow model, you could freeze it into a frozen graph (a .pb file). Once your model has been converted to a frozen graph, it can be loaded and executed from C++. I believe this is the most straightforward and fastest way to get your model working with DirectML. If you need help navigating the TensorFlow C API, please let us know! Edit: Alternatively, if you are familiar with ONNX and can convert your model to an ONNX model, you could even use onnxruntime instead of TensorFlow, which can use DirectML underneath.
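As a rough sketch of the freezing step described above: the snippet below builds a trivial stand-in model (a single matmul; the variable name `w` and tensor names `input`/`output` are placeholders for your real network) and freezes it into a `.pb` file. It uses the `tf.compat.v1` API so it also runs on modern TensorFlow; on tensorflow-directml 1.15 the same calls exist directly under `tf`.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Stand-in model: in practice this would be your trained network.
graph = tf.Graph()
with graph.as_default():
    x = tf.compat.v1.placeholder(tf.float32, [None, 4], name="input")
    # use_resource=False keeps classic VariableV2 nodes, which the
    # graph_util freezing path handles.
    w = tf.compat.v1.Variable(tf.ones([4, 2]), name="w", use_resource=False)
    tf.identity(tf.matmul(x, w), name="output")
    init = tf.compat.v1.global_variables_initializer()

with tf.compat.v1.Session(graph=graph) as sess:
    sess.run(init)
    # Replace variables with constants so the graph is self-contained.
    frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess, graph.as_graph_def(), ["output"])

# Serialize the frozen GraphDef to a .pb file.
with open("frozen_model.pb", "wb") as f:
    f.write(frozen.SerializeToString())
```

The resulting `frozen_model.pb` contains only constant weights and ops, which is the form the TensorFlow C API (and the sample linked below in this thread) expects to load.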
Hi PatriceVignola, do you have any relevant information or sample demo code we could try? Thanks!
Hi @MarkHung00, I created a basic sample over here. The sample goes through the process of loading a frozen squeezenet.pb model, creating a graph from it, and finally creating a session. Feel free to extract the parts that are relevant for you and let me know if you run into any issues or have other questions!
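The three steps the sample walks through (parse the frozen `.pb`, create a graph from it, create a session and run inference) can be sketched in Python as follows. To keep the snippet self-contained, it fabricates a trivial serialized graph in memory as a stand-in for the real `squeezenet.pb` bytes; the tensor names `input:0`/`output:0` are assumptions for illustration.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Stand-in for the frozen squeezenet.pb: serialize a trivial graph to bytes.
tmp = tf.Graph()
with tmp.as_default():
    x = tf.compat.v1.placeholder(tf.float32, [None, 3], name="input")
    tf.identity(x * 2.0, name="output")
pb_bytes = tmp.as_graph_def().SerializeToString()

# 1) Parse the serialized GraphDef (in the sample, read from squeezenet.pb).
graph_def = tf.compat.v1.GraphDef()
graph_def.ParseFromString(pb_bytes)

# 2) Create a graph from it.
graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

# 3) Create a session and run inference by tensor name.
with tf.compat.v1.Session(graph=graph) as sess:
    out = sess.run("output:0", feed_dict={"input:0": [[1.0, 2.0, 3.0]]})
```

The C API version in the sample follows the same shape, just with explicit buffer, graph, and session objects instead of Python context managers.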
Hi PatriceVignola, thank you very much for your assistance. We have successfully run it with Visual Studio and will try other models in the next stage. Thanks a lot!
@MarkHung00
For example, FP32 and FP16 are the data types most commonly supported across DML operators, while int32 is reserved for the CPU instead.
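One practical consequence of the dtype note above: if part of a pipeline produces int32 tensors, casting them to FP32 (or FP16) before the heavy math may keep that work on DML-supported operators rather than the CPU. A minimal sketch of that pattern, with hypothetical tensor names:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    # int32 work is handled on the CPU, so cast to a DML-friendly dtype
    # before the compute-heavy ops.
    ints = tf.compat.v1.placeholder(tf.int32, [None], name="ints")
    floats = tf.cast(ints, tf.float32)  # FP32/FP16 are broadly supported by DML
    doubled = floats * 2.0

with tf.compat.v1.Session(graph=graph) as sess:
    out = sess.run(doubled, feed_dict={ints: [1, 2, 3]})
```

Whether a given op actually lands on the GPU still depends on the tensorflow-directml kernel coverage for that op and dtype, so this is a heuristic rather than a guarantee.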
Thanks a lot! We are currently developing some real-time scenarios, and the inference time on an Nvidia GPU is still satisfactory. We have some questions: we hope that the inference framework can flexibly use various GPUs (Intel/AMD/Nvidia), and DirectML is a good choice, but in the future there may be models that require more computing power and longer inference times.
Also, take note that this repository (tensorflow-directml 1.15) is mostly in maintenance mode. We're still doing bug fixes and improving performance, but we're now more focused on the preview of our plugin for TF 2. We don't have a C API for the plugin yet, but it's coming soon! |
We highly appreciate your help. In addition, we have two questions:
We are developing camera post-processing in a DMFT (a Windows user-space DLL), and we currently hope to run DirectML (with TensorFlow) in the DMFT. We have tried porting the C++ DirectML sample (DirectMLSuperResolution) into the DMFT and it works, but we don't know how to proceed with the TensorFlow part.
Best regards