Optimize non-trivial BSDF in photograph -> CG render? #446
-
I went through the current inverse rendering tutorials in the documentation. The given use cases seem a bit trivial to me (using differentiable rendering on the exact same scene with only a few parameters tweaked), so I'm stuck on how to do something practically useful: take a real photograph of an object, build an equivalent digital scene and camera (with minimal per-pixel delta, obviously), and ask the optimizer to recover the entire BSDF (the shading model, the albedo, the roughness, etc.). I already have photogrammetry in place to extract camera parameters, so the initial match should be close.

tl;dr: is it possible to differentiate a photograph against an equivalent digital scene and do non-trivial optimization of an entire BSDF? If not, is it at least possible to do the above while isolating the optimization to the albedo or another individual BSDF input?

I'm hoping that inverse rendering techniques can one day serve as a foundation for replacing manual artist-driven scanning and PBR material validation.
-
Hi @Polyrhythm,
This is a good question. Indeed, the tutorials mostly focus on synthetic cases, where the ground truth is generated from the same scene. Applying these techniques to real-world data (e.g. a photograph) certainly comes with its own set of challenges.
I would recommend taking a look at recent research papers (e.g. this one) where the authors apply inverse rendering algorithms to recover textured BSDF data from real photographs. You will see that the base geometry is first recovered using other methods (e.g. photogrammetry).
Please let us know if you have more specific questions about how to perform such optimizations with Mitsuba.
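To give a concrete starting point, here is a minimal sketch of what such an optimization loop could look like in Mitsuba 3, following the pattern from the official optimization tutorials. The file names (`scene.xml`, `photo.exr`) and the parameter keys are placeholders for your own setup: run `print(mi.traverse(scene))` to discover the actual keys, and make sure the scene uses a differentiation-aware integrator such as `prb`.

```python
# Minimal sketch, assuming Mitsuba 3. "scene.xml", "photo.exr" and the
# parameter keys below are placeholders for your own setup.
import drjit as dr
import mitsuba as mi

mi.set_variant('cuda_ad_rgb')  # or 'llvm_ad_rgb' for CPU

# The scene should use a differentiation-aware integrator, e.g. 'prb'
scene = mi.load_file('scene.xml')

# Reference photograph, already aligned to the virtual camera and
# matching the render resolution
ref = mi.TensorXf(mi.Bitmap('photo.exr'))

params = mi.traverse(scene)  # print(params) to list all differentiable keys
keys = ['object.bsdf.base_color.value',  # hypothetical key names
        'object.bsdf.roughness.value']

opt = mi.ad.Adam(lr=0.02)
for k in keys:
    opt[k] = params[k]       # register the parameters we want to optimize
params.update(opt)

for it in range(200):
    # Differentiable render with the current parameter values
    img = mi.render(scene, params, spp=16, seed=it)

    # Pixel-wise L2 loss against the photograph
    loss = dr.mean(dr.sqr(img - ref))

    # Backpropagate to the registered parameters and take a gradient step
    dr.backward(loss)
    opt.step()

    # Keep the parameters in a physically plausible range
    for k in keys:
        opt[k] = dr.clamp(opt[k], 0.0, 1.0)
    params.update(opt)
```

Note that this only optimizes the continuous inputs of a fixed BSDF model; choosing the shading model itself is a discrete decision that gradient descent cannot make, which is why such papers typically fix a flexible model (e.g. a principled/microfacet BSDF) and optimize its textured parameters.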