[ONNXRuntime] Added builds with CUDA and TensorRT Execution Providers #4386
Conversation
Force-pushed from 93cf3f8 to 82a33f6
Force-pushed from e1db9ad to c2cdf0c
Force-pushed from b245017 to 6b2963b
Force-pushed from 6b2963b to c2cdf0c
Force-pushed from 7857a03 to f84461d
Force-pushed from f3dbc70 to 0e641b2
Force-pushed from 0e641b2 to 17c64ca
@maleadt When you have time, could you take a look?
LGTM. I mean, this isn't the kind of generic recipe we'd want once everything is figured out (for that it would need to support multiple versions of CUDA), but we haven't figured any of that out yet, so in the meantime this seems OK if you can use the generated artifacts.
Force-pushed from da3e36d to be022d6
Force-pushed from 0b45c10 to ce62257
To support multiple versions in the future.
Force-pushed from ce62257 to cfadc5c
I'm taking Tim's approval as good enough. Thanks @stemann for the hard work!
Likely dependent on #4369