Replies: 2 comments
-
I managed to do it for perspective sensors (it needs width/height scaling):
Is it possible for other sensor types?
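The width/height scaling mentioned above can be sketched as plain pinhole-camera matrix maths. A minimal sketch, under assumptions that are mine rather than the poster's: the function name `project_perspective` is hypothetical, the camera looks down +z with y up, the image origin is top-left, and `fov_x_deg` is the horizontal field of view. This is generic projection math, not Mitsuba's exact internal implementation:

```python
import numpy as np

def project_perspective(p_world, to_world, fov_x_deg, width, height):
    """Project a world-space point to pixel coordinates for a pinhole
    (perspective) sensor. Hypothetical helper illustrating the matrix
    maths, including the width/height scaling step."""
    # World -> camera space via the inverse of the camera-to-world matrix
    to_camera = np.linalg.inv(to_world)
    p_h = to_camera @ np.array([*p_world, 1.0])
    x, y, z = p_h[:3] / p_h[3]
    # Points with z <= 0 are behind the camera; callers should check this
    # Perspective divide using the horizontal field of view;
    # the vertical half-angle follows from the aspect ratio (height/width)
    tan_half_x = np.tan(np.radians(fov_x_deg) / 2.0)
    tan_half_y = tan_half_x * height / width
    # NDC in [-1, 1] -> pixels; v is flipped for a top-left image origin
    u = (0.5 + 0.5 * x / (z * tan_half_x)) * width
    v = (0.5 - 0.5 * y / (z * tan_half_y)) * height
    return u, v

# A point straight ahead of an identity camera lands at the image center
u, v = project_perspective(np.array([0.0, 0.0, 5.0]), np.eye(4),
                           90.0, 640, 480)
```

Other sensor types (e.g. orthographic or fisheye) would need their own mapping in place of the perspective divide, which is presumably why a single built-in method is preferable.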
-
Hi @syllebra, I think you're looking for this method: It won't do a visibility test for you, if you need one. I think the closest example we have of something that uses this is the
-
Hello, and congrats on this incredible work!
Mitsuba is so easy to use that I tried to use it to create synthetic datasets for training AI models.
For this, I need to "backproject" point(s) defined in the UV domain of a shape into the sensor domain (to create image-coordinate ground-truth data).
I found an interface to get a world point from a UV point (in Python):
inter = bf.eval_parameterization(uv=mi.Point2f(u,v), ray_flags=mi.RayFlags.All, active=True)
I wonder if there exists
sensor.from_world_p(pw)
or something equivalent? I mainly use perspective projection and could recreate the perspective matrix maths myself, but it seemed there must already be some code to do this, maybe in the inverse-rendering code? Thank you very much.