Investigate HDR image quality at lower bitrates #364
Initial investigation into using linear-domain error metrics rather than LNS-domain error metrics:
Visually, low-bitrate encodings improve quite a lot - manual inspection shows a significant reduction in block artifacts - so this looks like a reasonable change to make. However, a new problem this introduces is that very bright texels in a block tend to dominate the error metrics (e.g. HDR luminance over 40 vs an LDR max of 1). The compressor will sacrifice a lot of quality to improve the accuracy of the very bright texels in the block, often to the point that the relatively low-brightness texels quantize to black and the image gets unacceptable speckle artifacts. This isn't entirely surprising - this problem is why RMSLE (a logarithmic variant of RMSE) was invented for HDR data. However, the LNS encoding that ASTC uses is effectively a piecewise logarithmic curve, so I think we end up back where we started if we do this. (EDIT - yes, we do.)
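To make the dominance effect concrete, here is a minimal standalone sketch (the helper names are hypothetical, not compressor code) comparing squared error accumulated in the linear domain against a log-domain (RMSLE-style) error, for one bright and one dim texel with a similar relative error:

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical helpers for illustration only: per-texel squared error in the
// linear domain vs. in a log domain (RMSLE-style log1p).
static float sq_error_linear(float orig, float recon)
{
    float d = orig - recon;
    return d * d;
}

static float sq_error_log(float orig, float recon)
{
    float d = std::log1p(orig) - std::log1p(recon);
    return d * d;
}

int main()
{
    // One very bright texel and one dim texel, each reconstructed with a
    // similar *relative* error.
    float bright_lin = sq_error_linear(40.0f, 36.0f);   // 16.0
    float dim_lin    = sq_error_linear(0.10f, 0.05f);   // 0.0025
    printf("linear-domain error ratio: %g\n", bright_lin / dim_lin); // ~6400x

    float bright_log = sq_error_log(40.0f, 36.0f);
    float dim_log    = sq_error_log(0.10f, 0.05f);
    printf("log-domain error ratio: %g\n", bright_log / dim_log);    // ~5x
    return 0;
}
```

In the linear domain the bright texel's error is thousands of times larger than the dim texel's, so the trial-and-error search will always trade the dim texels away; in a log-style domain the two are within an order of magnitude of each other.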
Last I looked, mPSNR was definitely preferred to PSNR for evaluating HDR image quality, so I would be sorry to see us take a big hit there.
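For anyone unfamiliar, this is roughly how mPSNR is usually described: tone map both images at a sweep of exposure stops with a 2.2 gamma, clamp to the 8-bit range, and accumulate squared error across all exposures. The sketch below is just that textbook formulation, with a placeholder exposure range and hypothetical names, not the metric code used in any particular test harness:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Tone map a linear HDR value at a given exposure compensation (in stops),
// with a 2.2 gamma, quantized and clamped to the 8-bit range.
static float tonemap8(float x, int stops)
{
    float v = 255.0f * std::pow(std::ldexp(std::max(x, 0.0f), stops), 1.0f / 2.2f);
    return std::round(std::min(std::max(v, 0.0f), 255.0f));
}

// Sketch of a multi-exposure PSNR (mPSNR) style metric over a single channel.
// In practice the exposure range is chosen per image; [-2, +2] is arbitrary.
float mpsnr(const std::vector<float>& ref, const std::vector<float>& enc,
            int stop_lo = -2, int stop_hi = 2)
{
    double sse = 0.0;
    size_t count = 0;
    for (int c = stop_lo; c <= stop_hi; c++)
    {
        for (size_t i = 0; i < ref.size(); i++)
        {
            double d = tonemap8(ref[i], c) - tonemap8(enc[i], c);
            sse += d * d;
            count++;
        }
    }
    double mse = sse / double(count);
    return float(10.0 * std::log10(255.0 * 255.0 / mse));
}
```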
Yes, agreed. At this point I'm mostly just experimenting to see what helps. The current compressor is pretty strong at 8bpp, but suffers horrible block artefacts at lower bitrates in regions with high max brightness, high brightness variability, and complex patterns. It just runs out of bits, but seems to make coding choices that look particularly bad because of the obvious blocking they introduce. The experiments above make mPSNR worse, but the block artefacts mostly evaporate into a more dithered-looking noise.

For example, this is the PolyHaven Shanghai Bund image using 6x6 blocks:

It may be that this isn't a generically solvable problem - HDR is a pain to encode at the best of times, and 3.56bpp isn't much to play with. The end result may be that lower bitrates are not viable for HDR, or require content constraints to cope with the loss of bitrate, e.g. limiting max brightness.
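(For context on the bitrates: an ASTC block is always 128 bits, so 4x4 blocks give 128 / 16 = 8 bpp while 6x6 blocks give 128 / 36 ≈ 3.56 bpp - less than half the bit budget spread across more than twice as many texels per block.)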
This type of block error seems to be a particular quirk of the LNS encoding, rather than of the increased dynamic range in the image. Even with max brightness clamped to 1.0, the HDR image still suffers from severe block artifacts for 6x6 blocks, whereas the LDR equivalent compresses with no problem. It's also not a new problem - testing back on 1.7 shows the same issue.
Cool image, thanks for sharing! I don't remember the details of LNS - it's that piecewise-linear approximation to a log curve, right? I have to say, it looks like a fair number of blocks in the image are mapping to a single color, which feels like a bug. Unless the LNS color mode somehow constrains one of the color endpoints to be (0,0,0), or something? I have not read the spec since around 2015 or so...
Yes, that's the one.
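As a quick illustration of the general idea (not the actual LNS decode from the spec): reinterpreting the raw bits of a positive float already behaves like a piecewise-linear approximation of log2, with the exponent providing the integer part and the mantissa linearly filling in between powers of two:

```cpp
#include <cstdint>
#include <cstring>

// Illustration only - not the ASTC LNS encoding itself. For a positive,
// finite float, treating the raw exponent+mantissa bits as a fixed-point
// number yields a piecewise-linear approximation of log2(x).
static float approx_log2(float x)
{
    uint32_t bits;
    std::memcpy(&bits, &x, sizeof(bits));
    return float(bits) * (1.0f / 8388608.0f) - 127.0f; // 2^23 mantissa bits
}
```

Here approx_log2(2.0f) is exactly 1, but approx_log2(3.0f) is 1.5 rather than log2(3) ≈ 1.585 - the linear segment between 2 and 4.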
Yeah, something definitely smells with this. I just wish I knew what =) The code is generic and "works" for 8bpp, so if it is a bug it's a subtle one ...

For the cases where there is actual high dynamic range I can sort of understand the problem from a conceptual point of view: the endpoints are far apart, so you need very fine weight quantization to get granularity of texel weights along the luminance spectrum. I don't understand why LNS is so bad when brightness is clamped < 1.0. That shouldn't be any worse than the LDR compression in terms of endpoint spacing, so that's the obvious initial line of investigation.
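To put some hypothetical numbers on the endpoint-spacing point (the endpoint values and weight count below are made up for illustration, not taken from a real encoding):

```cpp
#include <cmath>
#include <cstdio>

// Back-of-envelope sketch of why widely spaced endpoints hurt: with only a
// few quantized weight levels per texel, the step between adjacent
// reconstructed values is large however the interpolation is done.
int main()
{
    float ep_lo = 0.01f;     // dim endpoint, linear luminance
    float ep_hi = 40.0f;     // bright endpoint, linear luminance
    int   weight_levels = 8; // e.g. 3-bit weights

    // Interpolating in linear light: one weight step is ~5.7 in luminance,
    // so all the dim texels collapse onto the low endpoint.
    printf("linear step: %.3f\n", (ep_hi - ep_lo) / float(weight_levels - 1));

    // Interpolating in a log-like (LNS-style) domain: the block spans
    // log2(40 / 0.01) ~= 12 stops, so one weight step is still ~1.7 stops -
    // a very coarse luminance ladder for smooth gradients.
    float stops = std::log2(ep_hi / ep_lo);
    printf("stops per weight step: %.2f\n", stops / float(weight_levels - 1));
    return 0;
}
```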
Some observations from Rich G, who has made HDR 6x6 play nicely with basisu:
Notes/ideas:
Currently HDR images compress well at 8bpp, but suffer from block artifacts at lower bitrates.
We should investigate how we encode HDR images at lower bitrates to see if we can make a material difference to HDR image quality. We should also investigate whether content limitations (e.g. clamping to a max brightness) help.
(EDIT: Issue retitled and redescribed to cover the actual issue under investigation rather than a random part of it)