Usage with cudaHostAlloc #107

Closed
htngr opened this issue Aug 21, 2024 · 3 comments

Comments

htngr commented Aug 21, 2024

I'm trying to use this library on an NVIDIA Jetson Nano. My goal is to go directly from v4l2 to the Jetson's GPU without going through the CPU (zero-copy). There are official samples from NVIDIA that accomplish this by passing a user pointer to GPU/shared memory (allocated via cudaHostAlloc). How can I use this crate for this purpose? I couldn't find a way to pass a pointer to the UserptrStream.
An alternative could be DMA, but it seems to me that this isn't implemented yet?

@molysgaard

I have done exactly the same. See this for reference: #103
This is another data point suggesting that allowing the user of the library to specify how memory is allocated should be supported :)


htngr commented Aug 22, 2024

> I have done exactly the same. See this for reference: #103 This is another data point suggesting that allowing the user of the library to specify how memory is allocated should be supported :)

Thanks, this seems to work (after writing a custom allocator).


raymanfx commented Sep 1, 2024

dup: #103

raymanfx closed this as not planned (duplicate) on Sep 1, 2024.