python: add initial cudaFromGstSample support #10
base: master
Conversation
Nice! Do you have an example including the GstSample? Which package do you use for that on a Nano? Thanks.
Thanks for the contribution @aconchillo! If you have the code for reading video using this (presumably using appsink), I would like to merge it all.
New examples PR dusty-nv/jetson-inference#389
Added a new …
Force-pushed from a557a85 to 1141aaa
Force-pushed from 5c2bd5c to f6ef0ec
There was an issue with …
Force-pushed from f6ef0ec to 0c77ec8
Renamed …
This patch adds initial bindings support for GStreamer. It currently provides a single function, `cudaFromGstSample`, which expects a GstSample as its only argument. The GstSample needs to be in NV12 format and stored in host memory (which means there is no zero-copy). The function then copies the frame to CUDA memory and converts it to RGBA (float4) directly on the GPU.

A usage example could be:

```python
def on_new_sample(appsink):
    sample = appsink.emit('pull-sample')
    img, width, height = jetson.utils.cudaFromGstSample(sample)
    # classify the image
    class_idx, confidence = net.Classify(img, width, height)
```

When using GStreamer to decode, say, an H.264-encoded video, it is very likely that OpenMAX GStreamer plugins are used (e.g. `omxh264dec`). These produce a `video/x-raw(memory:NVMM)` frame that is mapped into NVMM memory (i.e. a DMA buffer). Unfortunately, it is not straightforward to convert NVMM memory to CUDA, which is why `cudaFromGstSample` expects the frame to be stored in host memory. To copy an NVMM-mapped frame to host memory, your pipeline needs to include the `nvvidconv` element, which converts an NVMM-mapped frame into a regular `video/x-raw` frame.
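To make the `nvvidconv` requirement concrete, here is a minimal sketch of a pipeline description that decodes an H.264 file and hands NV12 host-memory frames to an `appsink`. The surrounding element chain (`filesrc`, `qtdemux`, `h264parse`) and the helper function name are illustrative assumptions, not part of this patch:

```python
def build_decode_pipeline(filepath):
    """Build a gst-launch-style description that decodes H.264 and
    delivers NV12 frames in host memory to an appsink.

    nvvidconv copies the NVMM (DMA) buffer produced by omxh264dec
    into a regular video/x-raw buffer, which is what
    cudaFromGstSample expects.
    """
    return (
        "filesrc location={} ! qtdemux ! h264parse ! omxh264dec ! "
        "nvvidconv ! video/x-raw,format=NV12 ! "
        "appsink name=sink emit-signals=true"
    ).format(filepath)
```

With the GStreamer Python bindings installed, such a description could be instantiated with `Gst.parse_launch(...)` and the appsink's `new-sample` signal connected to a callback like `on_new_sample` above.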
Force-pushed from 0c77ec8 to 48e529f
Force-pushed from 48e529f to 5df2ded
Added a …
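As a side note on the NV12-to-RGBA (float4) conversion described above, a quick arithmetic sketch of the per-frame memory footprint (the 1920×1080 frame size is just an illustrative assumption):

```python
def frame_sizes(width, height):
    # NV12: a full-resolution Y plane plus a half-resolution
    # interleaved UV plane -> 1.5 bytes per pixel.
    nv12_bytes = width * height * 3 // 2
    # RGBA float4: four 32-bit floats per pixel -> 16 bytes per pixel.
    rgba_f4_bytes = width * height * 16
    return nv12_bytes, rgba_f4_bytes

nv12, rgba = frame_sizes(1920, 1080)
```

For a 1080p frame this works out to roughly 3.1 MB in NV12 versus roughly 33 MB as RGBA float4, which is worth keeping in mind when sizing GPU memory for multi-stream pipelines.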