Realtime inference #11
Hi, yes, it is possible, but it might require quite a few changes to the code. Our method uses an offline global optimization to align the depth snippets. You can modify this to optimize only the scale and shift of new snippets, treating the previously aligned sequence as one long snippet with a fixed scale and shift.
Thanks for your response! Does this mean, though, that all prior frames need to have already been generated? I should have been clearer: I'm looking to render the depth accurately at any point in the video, so that even if you start rendering in the middle of the clip and later go back and render from the start, the results line up. It sounds like, even with the modification, rendering still has to proceed from the start. Is that right?
Technically this is possible: the co-alignment is flexible. You can fix the scale and shift parameters for the "old" sub-sequence and align new snippets to it.
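The incremental alignment described above can be sketched as a least-squares fit over the frames where a new snippet overlaps the already-aligned sequence. This is a minimal sketch, not code from this repository; the function name, array shapes, and overlap convention are all assumptions for illustration.

```python
# Hypothetical sketch of incremental snippet alignment: the previously
# aligned sequence keeps its fixed scale/shift, and only the new snippet's
# scale s and shift t are solved for, using the overlapping frames.
import numpy as np

def align_new_snippet(aligned_overlap, snippet_overlap):
    """Solve for (s, t) minimizing || s * snippet_overlap + t - aligned_overlap ||^2
    over the overlapping depth values (ordinary least squares)."""
    x = snippet_overlap.ravel()
    y = aligned_overlap.ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)  # design matrix [x, 1]
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)
    return s, t

# Toy usage: fabricate an overlap whose true transform is s=0.5, t=0.1.
ref = np.random.rand(2, 64, 64)   # overlap frames from the aligned sequence
new = (ref - 0.1) / 0.5           # same frames as seen by the new snippet
s, t = align_new_snippet(ref, new)
aligned_new = s * new + t         # new snippet now consistent with the old sequence
```

Each incoming snippet would be rescaled this way and appended, so earlier frames never need to be re-optimized.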
Hi there,
First of all, great work. The results really are amazing.
I'm struggling to understand the method and I'd appreciate some help.
Is it possible to modify this code so that depth can be generated on an ongoing basis (with a two-frame delay, of course)? Or does each frame use depth information from all other frames in the video, making this impossible?
I am trying to make a program that takes in one frame at a time and outputs temporally consistent depth.
Thank you!