Sounds good to me, for uncompressed textures. For compressed ones there's not much to do until the asset pipeline is fleshed out; compression really feels like an offline step, ideally with caching so that on the next run, if the source files are unchanged, the compressed and mipmapped texture is reused from the cache. Another aspect: for static textures (i.e. ones that are not modified at runtime), it would be good if, once the data has been uploaded to VRAM, the texture data in the `Vec` were emptied out to free up significant system RAM.
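A minimal sketch of that RAM-freeing idea, assuming a Bevy version where `Image::data` is a plain `Vec<u8>` and assuming the renderer exposes a point after the GPU copy where this is safe to do (the helper name is hypothetical):

```rust
use bevy::prelude::Image;

/// Hypothetical helper: once a static texture's pixels have been copied
/// to VRAM, drop the CPU-side buffer to reclaim system RAM. This would
/// have to run only after the renderer's GPU upload has completed.
fn release_cpu_pixels(image: &mut Image) {
    // Assigning a fresh Vec (rather than calling clear()) actually
    // releases the allocation instead of keeping its capacity around.
    image.data = Vec::new();
}
```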
---
I'm curious if anyone has done any more work on this. Mipmaps are a pretty substantial visual improvement when implemented, and a rather nasty artifact when not. The renderer and asset pipelines seem to have improved a decent amount in the last year; are they ready for something like this? At the very least, I think we could add mipmap generation without too much work: the texture data sits on the GPU, and we can run a simple blit on it without modifying the CPU-side asset data.
---
There's a Rust crate someone else has made that does runtime mipmap generation with current Bevy; it might be a neat thing to look into. At the very least it's useful to someone like me in the future, looking for a way around having to pregenerate mipmaps while Bevy doesn't natively have such a feature lol
---
Related: the preprocessor in the Bevy Asset V2 PR generates mipmaps for images processed into Basis Universal (using the `basis-universal` crate).
---
Just creating this post as a brain-dump of the ideas I had, as I was planning to implement mipmapping for textures in Bevy, but I have no motivation to continue working on it right now (if anyone else wants to do it, feel free).
First, for context: I had previously made a PR for the old renderer, back in the 0.4 days: #1685. You could have a look at that for some prior work, though of course the new renderer's architecture is different.
That PR also introduced a nice new example to Bevy, to demo different kinds of texture filtering. It procedurally generates a texture with a Julia fractal (code for that was copied from `wgpu`'s mipmap example) and allows switching between nearest/linear, with/without mipmaps, and anisotropic filtering, to showcase the difference in visual quality. :) Feel free to port over that example and add it to Bevy. :)
Now, for the new plan. I was going to tackle this work in two stages: (1) using mipmaps for rendering if they are available, and (2) auto-generating missing mipmaps when desired. I will describe my ideas for both here.
Part (1) is relatively straightforward: allow storing multiple mip levels in the `Image` asset, and if they are present, make sure to copy them all to the GPU. The old PR I linked above did this.

My idea was to replace the `data: Vec<u8>` in the texture asset struct with a `data: Vec<Vec<u8>>`. This would allow the user (or asset loader) to provide pixel data for each mipmap they have. This representation also makes it easy to access the data for a specific mip level, without having to do arithmetic. There is also a separate `mip_count` field. For now (without Part (2)), the actual number of mipmaps to be loaded to the GPU is the minimum of `mip_count` and `data.len()`.

Alternative storage idea: keep it as a single array, `data: Vec<u8>`, and concatenate the mipmaps. This is the way it is done in common GPU texture formats like DDS, and it makes it easy to deal with all the data as a single blob, but it requires arithmetic to find the start of each mip level if you want to access the data of a specific mipmap.

Then it is simply a matter of copying each mip level into GPU memory and setting the mip count correctly on the GPU texture, and that's it; it should be used for rendering!
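To make Part (1) concrete, here is a rough sketch of the proposed asset layout (field and method names are my proposal, not existing Bevy API), including the `min` rule above and the offset arithmetic the alternative single-blob layout would need:

```rust
/// Sketch of the proposed asset layout; not the current Bevy `Image` API.
pub struct Image {
    /// One pixel buffer per mip level: `data[0]` is the full-size image,
    /// `data[1]` is half-size, and so on. Levels may be left empty.
    pub data: Vec<Vec<u8>>,
    /// Desired number of mip levels on the GPU.
    pub mip_count: u32,
    // ...plus format, dimensions, sampler settings, etc.
}

impl Image {
    /// Without Part (2): the number of levels actually uploaded is the
    /// minimum of the desired count and the levels we have data for.
    pub fn gpu_mip_count(&self) -> u32 {
        self.mip_count.min(self.data.len() as u32)
    }
}

/// For the alternative single-blob layout: byte offset of `level` inside
/// the concatenated buffer, assuming a tightly packed uncompressed format.
pub fn mip_offset(width: u32, height: u32, bytes_per_pixel: u32, level: u32) -> usize {
    (0..level)
        .map(|l| {
            let w = (width >> l).max(1) as usize;
            let h = (height >> l).max(1) as usize;
            w * h * bytes_per_pixel as usize
        })
        .sum()
}
```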
Part (2) is a lot more controversial, as there are many ways it could be done.
My old PR simply provided a method on the texture asset type, which called into the `image` crate to run a downscaling algorithm on the CPU. This is slow, and I actually wouldn't want things to be done this way in Bevy.
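For reference, that CPU path boils down to something like the following (my sketch against the `image` crate, assuming an RGBA8 source; `generate_mips_cpu` is not the actual method from the old PR):

```rust
use image::{imageops, imageops::FilterType, RgbaImage};

/// Generate all mip levels for an RGBA8 texture on the CPU.
/// Slow but simple: each level is a filtered downscale of level 0.
fn generate_mips_cpu(base: &RgbaImage) -> Vec<RgbaImage> {
    let (mut w, mut h) = base.dimensions();
    let mut mips = Vec::new();
    while w > 1 || h > 1 {
        w = (w / 2).max(1);
        h = (h / 2).max(1);
        mips.push(imageops::resize(base, w, h, FilterType::Triangle));
    }
    mips
}
```

Now I will describe my new idea, which I was planning to experiment with.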
I wanted to create a new `MipmapGenerate` render graph node, which would be inserted before the main pass. It would contain a queue holding info about any mipmaps that need to be generated (texture + source mip level + destination mip level). Whenever any other part of the renderer wants mipmaps generated for a texture (it doesn't matter whether it comes from an asset or is some internal texture used inside the renderer, like a reflection map), it would add an entry to the queue.
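The queue could be as simple as this (all names are hypothetical; `TextureId` stands in for whatever handle the renderer uses to identify GPU textures):

```rust
/// Hypothetical handle; in practice, whatever the renderer uses to
/// identify GPU textures.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub struct TextureId(u64);

/// One pending mipmap-generation step: downsample `source_mip`
/// of `texture` into `destination_mip`.
pub struct MipmapRequest {
    pub texture: TextureId,
    pub source_mip: u32,
    pub destination_mip: u32,
}

/// Queue owned by the `MipmapGenerate` render graph node. Other parts
/// of the renderer push into it; the node drains it each frame.
#[derive(Default)]
pub struct MipmapQueue {
    pub requests: Vec<MipmapRequest>,
}
```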
The render graph node would create a simple render pipeline with the appropriate texture binding and a fragment shader that samples from the source mip level and writes to the destination mip level. In the future, we could experiment with fancier downscaling algorithms for better quality, in this shader.
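The key `wgpu` mechanic here is that a texture view can be restricted to a single mip level, so one level can be bound for sampling while another serves as the render target. A sketch, reusing the hypothetical `MipmapRequest` from above (the exact type of `mip_level_count` varies across wgpu versions):

```rust
/// Create single-level views of `texture` for one downsample step.
/// `texture` must be created with `TextureUsages::TEXTURE_BINDING |
/// TextureUsages::RENDER_ATTACHMENT` for this to work.
fn mip_view_pair(
    texture: &wgpu::Texture,
    request: &MipmapRequest,
) -> (wgpu::TextureView, wgpu::TextureView) {
    let src = texture.create_view(&wgpu::TextureViewDescriptor {
        base_mip_level: request.source_mip,
        mip_level_count: Some(1), // Option<u32> in recent wgpu releases
        ..Default::default()
    });
    let dst = texture.create_view(&wgpu::TextureViewDescriptor {
        base_mip_level: request.destination_mip,
        mip_level_count: Some(1),
        ..Default::default()
    });
    (src, dst)
}
```

The node would then bind `src` together with a linear-filtering sampler, attach `dst` as the render pass color target, and draw a fullscreen triangle whose fragment shader simply samples the source.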
To reduce the GPU workload, the queue would be sorted by texture and mip level, both to avoid unnecessary texture re-bindings and to allow generating smaller mip levels from previously-generated larger ones when multiple mipmaps are needed for the same texture.
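With the hypothetical `MipmapQueue` from the sketch above, that ordering is a single sort:

```rust
// Group requests by texture (fewer re-bindings) and order each
// texture's levels largest-first, so a freshly generated level can
// serve as the source for the next smaller one in the same batch.
queue.requests.sort_by_key(|r| (r.texture, r.destination_mip));
```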
For textures that come from image assets, let's assume that Part (1) is done as I described previously, with the `data: Vec<Vec<u8>>` and `mip_count` in the asset type. We can treat the `mip_count` field as the desired final number of mipmaps that should be in GPU memory. We look at the `data: Vec<Vec<u8>>` in the asset, and:

- for every mip level below `mip_count` that has data, copy that data to the GPU;
- for every level that is missing, queue up a generation step that downsamples it from the previous level (which is either uploaded asset data or a previously-generated level).

This way is the most flexible. It would even allow the user to do funky things, for example: procedurally generate a texture, but only fill, say, the data for levels 0, 3, 4, and set `mip_count` to 6. Bevy would see that levels 1, 2, 5 are missing. It would copy the data for levels 0, 3, 4, as provided by the user, into the GPU, and queue up mipmap generation for levels 0->1, 1->2, 4->5.
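A sketch of that decision loop, in terms of the hypothetical `Image`, `TextureId`, `MipmapQueue`, and `MipmapRequest` types from the earlier sketches (`upload_mip_level` is a stand-in for the actual GPU copy):

```rust
/// Stand-in for the real GPU copy (e.g. wgpu's `Queue::write_texture`).
fn upload_mip_level(_texture: TextureId, _level: u32, _pixels: &[u8]) {
    // ...copy `_pixels` into mip `_level` of `_texture` on the GPU...
}

/// Upload the mip levels that have data; queue generation for the rest.
/// Level 0 is assumed to always be provided, so `level - 1` never
/// underflows.
fn prepare_mips(image: &Image, id: TextureId, queue: &mut MipmapQueue) {
    for level in 0..image.mip_count {
        match image.data.get(level as usize) {
            // Data provided for this level: copy it straight to the GPU.
            Some(pixels) if !pixels.is_empty() => upload_mip_level(id, level, pixels),
            // Missing level: downsample it from the previous level, which
            // is either uploaded asset data or a previously-generated mip.
            _ => queue.requests.push(MipmapRequest {
                texture: id,
                source_mip: level - 1,
                destination_mip: level,
            }),
        }
    }
}
```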