Get lightmaps working in deferred rendering. #16836
Merged
Conversation
A previous PR, bevyengine#14599, attempted to enable lightmaps in deferred mode, but it still used the `OpaqueNoLightmap3dBinKey`, which meant that it would be broken if multiple lightmaps were used. This commit fixes that issue, and allows bindless lightmaps to work with deferred rendering as well.
pcwalton added C-Bug (An unexpected or incorrect behavior), A-Rendering (Drawing game state to the screen), and S-Needs-Review (Needs reviewer attention (from anyone!) to move forward), and removed C-Bug, on Dec 16, 2024
JMS55 approved these changes on Dec 17, 2024
IceSentry approved these changes on Dec 17, 2024
IceSentry added S-Ready-For-Final-Review (This PR has been approved by the community. It's ready for a maintainer to consider merging it) and removed S-Needs-Review on Dec 17, 2024
github-merge-queue bot removed this pull request from the merge queue due to failed status checks on Dec 18, 2024
PR needs rebasing since the other lightmap changes were merged.
alice-i-cecile added S-Waiting-on-Author (The author needs to make changes or address concerns before this can be merged) and removed S-Ready-For-Final-Review on Dec 19, 2024
Ok, this is updated for the bindless lightmap changes. I also added a …
pcwalton added S-Needs-Review and removed S-Waiting-on-Author on Dec 25, 2024
BenjaminBrienen approved these changes on Dec 25, 2024
pcwalton added S-Ready-For-Final-Review and removed S-Needs-Review on Dec 25, 2024
pcwalton added a commit to pcwalton/bevy that referenced this pull request on Dec 30, 2024:
`ExtractedView`s, not `ViewTarget`s.

OK, so this is tricky. Every frame, `delete_old_work_item_buffers` deletes the mesh preprocessing index buffers (a.k.a. work item buffers) for views that don't have `ViewTarget`s. This was always wrong for shadow map views, as shadow maps only have `ExtractedView` components, not `ViewTarget`s. However, before bevyengine#16836, the problem was masked, because uploading the mesh preprocessing index buffers for shadow views had already completed by the time `delete_old_work_item_buffers` ran.

But PR bevyengine#16836 moved `delete_old_work_item_buffers` from the `ManageViews` phase to `PrepareResources`, which runs before `write_batched_instance_buffers` uploads the work item buffers to the GPU. This itself isn't wrong, but it exposed the bug, because now it's possible for work item buffers to get deleted before they're uploaded in `write_batched_instance_buffers`.

This is actually intermittent! It's possible for the old work item buffers to get deleted and then *recreated* in `batch_and_prepare_binned_render_phase`, which runs during `PrepareResources` as well; under that system ordering, there will be no problem other than a little inefficiency arising from recreating the buffers every frame. But if `delete_old_work_item_buffers` runs *after* `batch_and_prepare_binned_render_phase`, then the work item buffers corresponding to shadow views will get deleted, and the shadows will disappear.

The fact that this is racy is what made it look like bevyengine#16922 solved the issue. In fact, it didn't: it just perturbed the ordering on the build bots enough that the issue stopped appearing. However, on my system, the shadows still don't appear with bevyengine#16922.

This commit solves the problem by making `delete_old_work_item_buffers` look at `ExtractedView`s, not `ViewTarget`s, preventing work item buffers corresponding to live shadow map views from being deleted.
github-merge-queue bot pushed a commit that referenced this pull request on Dec 30, 2024:
…actedView`s, not `ViewTarget`s. (#17039)
github-merge-queue bot pushed a commit that referenced this pull request on Jan 6, 2025:
Currently, our batchable binned items are stored in a hash table that maps bin key, which includes the batch set key, to a list of entities. Multidraw is handled by sorting the bin keys and accumulating adjacent bins that can be multidrawn together (i.e. have the same batch set key) into multidraw commands during `batch_and_prepare_binned_render_phase`.

This is reasonably efficient right now, but it will complicate future work to retain indirect draw parameters from frame to frame. Consider what must happen when we have retained indirect draw parameters and the application adds a bin (i.e. a new mesh) that shares a batch set key with some pre-existing meshes. (That is, the new mesh can be multidrawn with the pre-existing meshes.) To be maximally efficient, our goal in that scenario will be to update *only* the indirect draw parameters for the batch set (i.e. multidraw command) containing the mesh that was added, while leaving the others alone. That means that we have to quickly locate all the bins that belong to the batch set being modified.

In the existing code, we would have to sort the list of bin keys so that bins that can be multidrawn together become adjacent to one another in the list. Then we would have to do a binary search through the sorted list to find the location of the bin that was just added. Next, we would have to widen our search to adjacent indexes that contain the same batch set, doing expensive comparisons against the batch set key every time. Finally, we would reallocate the indirect draw parameters and update the stored pointers to the indirect draw parameters that the bins store.

By contrast, it'd be dramatically simpler if we simply changed the way bins are stored to first map from batch set key (i.e. multidraw command) to the bins (i.e. meshes) within that batch set key, and then from each individual bin to the mesh instances. That way, the scenario above in which we add a new mesh will be simpler to handle. First, we will look up the batch set key corresponding to that mesh in the outer map to find an inner map corresponding to the single multidraw command that will draw that batch set. We will know how many meshes the multidraw command is going to draw by the size of that inner map. Then we simply need to reallocate the indirect draw parameters and update the pointers to those parameters within the bins as necessary. There will be no need to do any binary search or expensive batch set key comparison: only a single hash lookup and an iteration over the inner map to update the pointers.

This patch implements the above technique. Because we don't have retained bins yet, this PR provides no performance benefits. However, it opens the door to maximally efficient updates when only a small number of meshes change from frame to frame.

The main churn that this patch causes is that the *batch set key* (which uniquely specifies a multidraw command) and *bin key* (which uniquely specifies a mesh *within* that multidraw command) are now separate, instead of the batch set key being embedded *within* the bin key.

In order to isolate potential regressions, I think that at least #16890, #16836, and #16825 should land before this PR does.

## Migration Guide

* The *batch set key* is now separate from the *bin key* in `BinnedPhaseItem`. The batch set key is used to collect multidrawable meshes together. If you aren't using the multidraw feature, you can safely set the batch set key to `()`.
ecoskey pushed a commit to ecoskey/bevy that referenced this pull request on Jan 6, 2025:
Get lightmaps working in deferred rendering. (#16836)
ecoskey pushed a commit to ecoskey/bevy that referenced this pull request on Jan 6, 2025:
…actedView`s, not `ViewTarget`s. (bevyengine#17039)
Labels: A-Rendering (Drawing game state to the screen), C-Bug (An unexpected or incorrect behavior), S-Ready-For-Final-Review (This PR has been approved by the community. It's ready for a maintainer to consider merging it)