Currently, we record all events and then post-process, filter, and use them on the CPU, which is costly in both memory and performance.
An EventHandler would be an approach that does all of these things as the last step of the shader pipeline.
The idea is that whenever we call recordEvent in the shader, instead of storing the event in memory we'd call an event handler to do "something" with the given event.
Hence, an EventHandler would look conceptually as follows:
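(A minimal sketch; `EventHandler`, `handleEvent`, and the exact `Event` layout are illustrative names, not existing rayx API.)

```cpp
// Illustrative event layout; the real shader event carries more fields
// (energy, stokes parameters, path length, element id, ...).
struct Event {
    double position[3];
    double direction[3];
};

// Conceptual C++: an EventHandler receives each event as it is produced,
// instead of the event being written straight to the output buffer.
// (GLSL has no virtual dispatch; there this would need the serialization
// scheme discussed under "Implementation Thoughts".)
struct EventHandler {
    virtual ~EventHandler() = default;
    virtual void handleEvent(const Event& e) = 0;
};
```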
The default EventHandler might store all events in some buffer (which is the behaviour the shader currently has).
But we might implement other EventHandlers for the following use-cases:

- storing only some parts of each Event
  - e.g. only position & direction (when visualizing, we only need these two)
  - this would reduce our memory requirements by a factor of 3
- not storing the first N or last N events
- only storing the events that hit element X, Y, or Z
- returning only a randomized sample of N rays (useful for rayx-ui)
- generating footprints in the shader (see the sketch after this list)
  - i.e. this EventHandler would not store the events at all, but would just add a point to the footprint whenever a new event was handed to it
  - this would be tremendously more memory-efficient when you only intend to visualize the output

... basically everything we might want to do with the Events could (I'm not saying should ^^) be an EventHandler.
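For example, a footprint handler might look roughly like this (a sketch with hypothetical names and a made-up grid mapping):

```cpp
#include <array>
#include <cstdint>

// Sketch: accumulate hits into a fixed-size 2D histogram instead of
// storing the events themselves, so memory use stays constant no
// matter how many events are recorded.
constexpr int GRID = 128;

struct FootprintHandler {
    std::array<std::uint32_t, GRID * GRID> cells{}; // per-cell hit counts

    // Map an element-local hit position (assumed here to lie in
    // [-1, 1] x [-1, 1]) to a cell and increment it; the event itself
    // is discarded.
    void handleEvent(double x, double z) {
        int ix = static_cast<int>((x + 1.0) * 0.5 * (GRID - 1));
        int iz = static_cast<int>((z + 1.0) * 0.5 * (GRID - 1));
        if (ix >= 0 && ix < GRID && iz >= 0 && iz < GRID)
            cells[iz * GRID + ix]++;
    }
};
```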
### Implementation Thoughts
If we stay in Vulkan/GLSL, this might be pretty awful.
In order to represent multiple types of EventHandlers, we'd need to implement another serialization system (à la Cutout, Surface, Behaviour, ...), which by itself isn't too bad.
But the problem comes from the fact that different EventHandlers will return different types of data back to the CPU (some return rays, some footprints, etc.).
Either we return all of these things as big chunks of doubles, or we'd need to fight quite a bit with Vulkan to get buffers for each type of data ready to be used.
This would probably be simpler with SYCL/CUDA, where we could define an EventHandler::OutputType for each EventHandler, which might be a more principled solution.
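Roughly, that idea could look like this (hypothetical names, assuming a single-source C++ setup):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch: each handler declares the type it hands back to the CPU, so
// host code can allocate the matching output buffer generically and no
// hand-written double-chunk serialization is needed.
struct Ray       { double position[3]; double direction[3]; };
struct Footprint { std::size_t cellIndex; std::uint32_t count; };

struct StoreAllHandler  { using OutputType = Ray; };      // full rays, as today
struct FootprintHandler { using OutputType = Footprint; };// footprint samples

template <typename Handler>
std::vector<typename Handler::OutputType> makeOutputBuffer(std::size_t n) {
    return std::vector<typename Handler::OutputType>(n);
}
```

The Vulkan route would instead mean flattening each output type into raw double chunks and reinterpreting them on the CPU.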
### Questions
Do you think this is over-engineered? Might be a fair criticism.
I just have the feeling that there are quite a few problems that might be solved by this.
As nice as it may be to have a specific EventHandler for every possible use case, this sounds like it might be cumbersome to implement (in Vulkan/GLSL). Maybe a compromise is better? Have various EventHandlers for different situations, but they all return rays. For use cases that ultimately require something else, there is still some post-processing, but the amount of data in the pipeline is (in some cases) reduced.
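In code, that compromise might look like this (hypothetical names; every handler shares the same output type, and only the filtering differs):

```cpp
#include <vector>

// Same illustrative Event as in the sketches above, plus the id of the
// element that was hit.
struct Event {
    double position[3];
    double direction[3];
    int    elementId;
};

// A handler that keeps only the events hitting one element: the output
// is still plain rays, so the CPU-side pipeline stays unchanged, just
// with far less data flowing through it.
struct ElementFilterHandler {
    int wantedElement;
    std::vector<Event> kept;

    void handleEvent(const Event& e) {
        if (e.elementId == wantedElement) kept.push_back(e);
    }
};
```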
In terms of use-cases: everything expressible with an EventHandler can also be expressed using post-processing on the CPU, so it doesn't allow us to do anything new.
The question is solely about performance & memory consumption. Our current system has a drastic bottleneck because we export & store all data that we might need, whereas each concrete problem only requires a certain subset of that data.