Gamma correctness with VSG #1291
-
Nice writeup about the issues.
For reference, here's what vsgCs does to try to approach gamma correctness:
I think this is the goal to shoot for in VSG.
-
There are several problems with colour management in the VSG and ancillary projects, and so I thought it would be sensible to have a thread about it.
Basic background
The numbers you see representing a colour on a computer usually don't directly represent the number of photons that come out of the monitor to make that colour. Originally, CRT screens were built so that increasing the signal voltage had a (as close as was feasible) perceptually uniform effect, but human vision isn't linearly sensitive to brightness. Early computers mapped the numbers for colours directly to voltage in the cable to the screen, and then when things turned digital, screens mapped the numbers to the brightness a CRT would give for those numbers. If you're trying to do lighting for a 3D scene to present as an image, you therefore need to know what the screen's going to do with the numbers it's given, and the colour space that tells you this for a typical screen is called sRGB. The maths can be approximated pretty closely with $C_{sRGB} = {C_{linear}}^{\frac{1}{2.2}}$ (equivalently, $C_{linear} = {C_{sRGB}}^{2.2}$). If you ignore this, you get problems when adding light from different sources together, as $(C_1 + C_2)^{\frac{1}{2.2}} \neq {C_1}^{\frac{1}{2.2}} + {C_2}^{\frac{1}{2.2}}$, or when attenuating light with distance, as $a\left(C^{\frac{1}{2.2}}\right) \neq (aC)^{\frac{1}{2.2}}$, and if you mix things up and only sometimes do the conversion, or do it when it doesn't need to happen, colours end up brighter or darker than they should be.
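As a rough illustration of those inequalities, here's a minimal, standalone C++ sketch (not from the original post; the helper names `linearToSRGB`/`srgbToLinear` are made up, and it uses the pure 2.2 power approximation rather than the exact piecewise sRGB curve) showing how summing or scaling already-encoded values drifts away from the correct result:

```cpp
#include <cmath>
#include <cstdio>

// Approximate transfer functions: encode linear light with the 1/2.2 power,
// decode sRGB values with the 2.2 power.
float linearToSRGB(float c) { return std::pow(c, 1.0f / 2.2f); }
float srgbToLinear(float c) { return std::pow(c, 2.2f); }

int main()
{
    // Adding two lights: summing the encoded values overshoots (here above 1.0).
    float c1 = 0.2f, c2 = 0.3f; // linear intensities
    std::printf("sum, correct:   %f\n", linearToSRGB(c1 + c2));       // ~0.73
    std::printf("sum, incorrect: %f\n", linearToSRGB(c1) + linearToSRGB(c2)); // ~1.06

    // Attenuating a light: scaling the encoded value ends up too dark.
    float a = 0.5f, c = 0.8f;
    std::printf("attenuation, correct:   %f\n", linearToSRGB(a * c)); // ~0.66
    std::printf("attenuation, incorrect: %f\n", a * linearToSRGB(c)); // ~0.45
    return 0;
}
```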
Some history
Around the early 90s, it was typical for realtime graphics to totally ignore gamma correctness, as what you could do on a contemporary machine would never leave it as the most noticeable problem, but by the end of the nineties, extra tricks were needed to get away with ignoring it. A common one was giving point lights linear attenuation instead of the quadratic attenuation they have in real life, as $\left(d^{-2}C\right)^{\frac{1}{2.2}} = d^{-\frac{1}{1.1}}C^{\frac{1}{2.2}} \approx d^{-1}\left(C^{\frac{1}{2.2}}\right)$.
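To get a feel for how close that approximation is, here's a quick throwaway C++ check (again not from the original post) comparing gamma-correct quadratic attenuation against the linear-attenuation trick for a full-intensity colour:

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // Compare (d^-2 C)^(1/2.2) with d^-1 (C^(1/2.2)) for C = 1 at a few distances.
    const float distances[] = {1.0f, 2.0f, 4.0f, 8.0f};
    for (float d : distances)
    {
        float correct = std::pow(1.0f / (d * d), 1.0f / 2.2f); // gamma-correct quadratic falloff
        float trick   = (1.0f / d) * std::pow(1.0f, 1.0f / 2.2f); // linear falloff on the encoded value
        std::printf("d=%g  correct=%.3f  trick=%.3f\n", d, correct, trick);
    }
    return 0;
}
```

The two stay reasonably close over typical distances, which is why the trick was a passable substitute for doing the conversion properly.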
Eventually, GPUs started getting built-in support for sRGB conversions: you could enable `GL_FRAMEBUFFER_SRGB` to have the fragment colour automatically converted before being written to the framebuffer (with correct blending), and use sRGB variants of texture formats, which would convert the colours from sRGB to linear when they were accessed (again, with correct filtering). You could then usually get all the convenience of ignoring gamma correctness along with all of the correctness.
More recently, things have got more complicated: first with post-process shader effects, which may or may not want to consume linear colour; then with tonemapping (sometimes an image will look better if you intentionally do the colour space conversion incorrectly, as it can preserve details that would otherwise be lost to a screen's inability to be perfectly dark or ludicrously bright, or to show small brightness differences that a human can see); and most recently with the wide availability of HDR monitors with non-sRGB preferred colour spaces.
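For reference, the OpenGL-era version of that setup looks roughly like the sketch below (`setupGammaCorrectGL` is a hypothetical helper, and it assumes an extension loader such as GLEW provides the sRGB enums; texture and image handling is simplified):

```cpp
#include <GL/glew.h>

// Sketch: enable sRGB conversion on framebuffer writes and upload a colour
// texture in an sRGB internal format so sampling returns linear values.
void setupGammaCorrectGL(const unsigned char* pixels, int width, int height)
{
    // Linear fragment shader output gets encoded to sRGB on write,
    // and blending is done on the linear values.
    glEnable(GL_FRAMEBUFFER_SRGB);

    // The sRGB internal format makes the GPU decode texels to linear when
    // they're sampled, with filtering applied to the decoded values.
    GLuint texture = 0;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
```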
Out of the box, OpenSceneGraph didn't do much about this - you could manually enable `GL_FRAMEBUFFER_SRGB` and visit any loaded textures to switch them to the sRGB variant of their previous format, but the image loaders loaded things as the non-sRGB versions.
VulkanSceneGraph does more, e.g. some of the loaders have options controlling whether images are assumed to be sRGB or not, and the built-in PBR shader does some colour space conversions. However, it's not perfect.
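That OSG texture visit might look something like the sketch below (`SRGBTextureVisitor` is a hypothetical helper, assuming a recent OSG where drawables take part in node traversal and GL headers that define the sRGB format enums; data textures such as normal maps would need to be excluded by some other means):

```cpp
#include <osg/NodeVisitor>
#include <osg/StateSet>
#include <osg/Texture>

// Hypothetical visitor that switches colour textures to the sRGB variant of
// their current internal format, so the GPU decodes them to linear on sampling.
class SRGBTextureVisitor : public osg::NodeVisitor
{
public:
    SRGBTextureVisitor() : osg::NodeVisitor(TRAVERSE_ALL_CHILDREN) {}

    void apply(osg::Node& node) override
    {
        if (osg::StateSet* stateSet = node.getStateSet()) convert(*stateSet);
        traverse(node);
    }

private:
    void convert(osg::StateSet& stateSet)
    {
        for (unsigned int unit = 0; unit < stateSet.getNumTextureAttributeLists(); ++unit)
        {
            auto* texture = dynamic_cast<osg::Texture*>(
                stateSet.getTextureAttribute(unit, osg::StateAttribute::TEXTURE));
            if (!texture) continue;

            switch (texture->getInternalFormat())
            {
                case GL_RGB:  case GL_RGB8:  texture->setInternalFormat(GL_SRGB8);        break;
                case GL_RGBA: case GL_RGBA8: texture->setInternalFormat(GL_SRGB8_ALPHA8); break;
                default: break; // leave anything else (e.g. data textures) untouched
            }
        }
    }
};

// Usage:
//   SRGBTextureVisitor visitor;
//   loadedScene->accept(visitor);
// plus enabling GL_FRAMEBUFFER_SRGB on the relevant state.
```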
Current problems
This list might not be exhaustive, but it's what I've noticed so far.