python: add initial cudaFromGstSample support #10

Open · wants to merge 4 commits into master from add-pygst-support
Conversation

aconchillo

This patch adds initial bindings support for GStreamer. It currently only
provides a single function, `cudaFromGstSample`.

`cudaFromGstSample` expects a GstSample as its single argument. The GstSample
needs to be in NV12 format and stored in host memory (which means there is no
zero-copy). The function then copies the frame to CUDA memory and converts it
to RGBA (float4) directly on the GPU.

A usage example could be:

    def on_new_sample(appsink):
        sample = appsink.emit('pull-sample')

        img, width, height = jetson.utils.cudaFromGstSample(sample)

        # classify the image
        class_idx, confidence = net.Classify(img, width, height)

When using GStreamer to, say, decode an H.264-encoded video, it is very
likely that the OpenMAX GStreamer plugins are used (e.g. `omxh264dec`). This
produces a `video/x-raw(memory:NVMM)` frame that is mapped into NVMM memory
(i.e. a DMA buffer). Unfortunately, it is not straightforward to convert NVMM
memory to CUDA memory, which is why `cudaFromGstSample` expects the frame to
be stored in host memory. To copy an NVMM-mapped frame to host memory, your
pipeline needs to include the `nvvidconv` plugin, which converts an
NVMM-mapped frame to a regular `video/x-raw` frame.
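
As a rough illustration, a pipeline along these lines should hand NV12
host-memory frames to the callback above. The file name, the decoder element,
and the classification network are assumptions made for the sake of the
sketch, not part of this patch:

    # Sketch only: decode an H.264 file, let nvvidconv copy the NVMM frames
    # into host-memory NV12, and deliver them to an appsink.
    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst, GLib

    import jetson.inference
    import jetson.utils

    Gst.init(None)
    net = jetson.inference.imageNet('googlenet')  # example network

    pipeline = Gst.parse_launch(
        'filesrc location=video.mp4 ! qtdemux ! h264parse ! omxh264dec ! '
        'nvvidconv ! video/x-raw,format=NV12 ! appsink name=sink emit-signals=true')

    def on_new_sample(appsink):
        sample = appsink.emit('pull-sample')
        img, width, height = jetson.utils.cudaFromGstSample(sample)
        class_idx, confidence = net.Classify(img, width, height)
        print(class_idx, confidence)
        return Gst.FlowReturn.OK

    sink = pipeline.get_by_name('sink')
    sink.connect('new-sample', on_new_sample)

    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()

On newer JetPack releases, `nvv4l2decoder` (or simply `decodebin`) can take
the place of `omxh264dec`, as noted further down in this conversation.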


blitzvb commented Aug 13, 2019

Nice!

Do you have an example including the GstSample? Which package do you use for that on a Nano?

Thanks.

@dusty-nv
Owner

Thanks for the contribution @aconchillo!

If you have the code for reading video using this (presumably using appsink), I would like to merge it all.

@aconchillo
Author

New examples PR dusty-nv/jetson-inference#389

@aconchillo
Author

Added a new VideoLoader class. I'm not sure if I also need to add it to the Jetson.Utils package; I don't know the difference.

aconchillo referenced this pull request in aconchillo/jetson-utils Aug 21, 2019
@aconchillo force-pushed the add-pygst-support branch 2 times, most recently from 5c2bd5c to f6ef0ec on August 27, 2019
@aconchillo
Author

There was an issue with `nvv4l2decoder` negotiating caps with `videorate`. This is now fixed and works with JetPack 4.2.1. `nvv4l2decoder` now has a higher rank than `omxh264dec` and is therefore chosen.

@aconchillo
Author

Renamed VideoLoader to VideoSource since I'm planning to add a VideoSink at some point.

@aconchillo
Author

Added a VideoSink class, useful for creating video files. Also added GstBuffer to numpy array conversions.
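
The conversion helpers added by the commit aren't reproduced here, but a generic GstBuffer-to-numpy mapping for an NV12 buffer usually follows the map/frombuffer pattern below. The function name and the assumption of tightly packed rows are illustrative, not the PR's API:

    # Generic sketch, not the helper added by this PR: map a GstBuffer and
    # wrap its bytes in a numpy array, assuming tightly packed NV12 data.
    import numpy as np
    from gi.repository import Gst

    def gst_buffer_to_ndarray(buf, width, height):
        ok, map_info = buf.map(Gst.MapFlags.READ)
        if not ok:
            raise RuntimeError('could not map GstBuffer')
        try:
            # NV12 is a full-resolution Y plane followed by a half-resolution
            # interleaved UV plane: height * 3 / 2 rows of `width` bytes.
            data = np.frombuffer(map_info.data, dtype=np.uint8)
            return data[:width * height * 3 // 2].reshape((height * 3 // 2, width)).copy()
        finally:
            buf.unmap(map_info)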
