Better color vision deficiency simulation #43
I’ll take a look at the papers on dichromacy that I have collected. What happens if you use another difference metric?
That's very interesting. So it's possible that it's not quite as bad as I thought. If that's the case, for the purpose of choosing colors I could use something like
I guess that could be a good starting point. Maybe using even more sophisticated color spaces, like the DIN versions, could lead to more regular palettes. I’m about to implement the IPT color space, which is often used in image and color perception algorithms. It seems to have excellent properties, especially concerning hue shifts (unlike CIELAB, which performs quite badly in the blue region) and color difference evaluation.
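For readers unfamiliar with IPT: here is a minimal sketch of the XYZ → IPT transform, assuming the standard published Ebner & Fairchild matrices (the matrices and the sample XYZ value are taken from the literature, not from this thread):

```julia
# Sketch of XYZ -> IPT (Ebner & Fairchild 1998), for illustration only.
const M_XYZ_TO_LMS = [ 0.4002  0.7075 -0.0807
                      -0.2280  1.1500  0.0612
                       0.0     0.0     0.9184]

const M_LMSP_TO_IPT = [0.4000  0.4000  0.2000
                       4.4550 -4.8510  0.3960
                       0.8056  0.3572 -1.1628]

function xyz_to_ipt(xyz)
    lms = M_XYZ_TO_LMS * xyz
    # signed 0.43 power keeps the nonlinearity defined for negative responses
    lmsp = sign.(lms) .* abs.(lms) .^ 0.43
    return M_LMSP_TO_IPT * lmsp
end

xyz_to_ipt([0.4124, 0.2126, 0.0193])  # XYZ of the sRGB red primary
```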
I found a rather new paper (from 2010) on color deficiency simulation. In contrast to Brettel et al.’s algorithm (from 1997), it uses the physiologically based two-stage model of color vision to simulate protanopia, deuteranopia, and tritanopia, plus anomalous trichromacy (protanomaly, deuteranomaly, and tritanomaly) at varying degrees of severity. The paper also contains a real-time contrast enhancement algorithm for dichromats that is temporally coherent. The title is: A Model for Simulation of Color Vision Deficiency and A Color Contrast Enhancement Technique for Dichromats. Here is the link to the pdf file: http://www.lume.ufrgs.br/bitstream/handle/10183/26950/000761444.pdf?sequence=1 Looks really promising to me.
I tried your difference matrix with the DIN99o color space instead, and the range of differences is considerably smaller: between +10 and -50 instead of +20 and -80. Basically, the only regions that are interesting for color scales are the first row (the difference between neighboring colors in the scale) and the change of difference relations between the red and green hues. I hope my interpretation of the distance matrix is correct. I used something like
for the color scales. Different chroma leads to greater differences (at other locations), which appears normal to me. There is another interesting difference if you use DIN99o instead of LCHuv: the locations of highest difference shift towards different regions of the color scale.
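For anyone wanting to reproduce this kind of comparison, here is a sketch of building such a pairwise difference matrix, assuming the Colors.jl API (colordiff with the DE_DIN99o metric); the palette below is only a placeholder:

```julia
using Colors  # assuming current Colors.jl; the package was Color.jl back then

palette = distinguishable_colors(6)   # placeholder palette
n = length(palette)

# pairwise color differences under the DIN99o metric
D = [colordiff(palette[i], palette[j], metric=DE_DIN99o())
     for i in 1:n, j in 1:n]
```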
This is slightly tangential, but during the Q&A after @dcjones's talk at JuliaCon, I suggested using constrained optimization solvers to design color-blindness-friendly palettes, which got a good laugh from the crowd. But I'm not so sure that it's ridiculous. There are various problems with optimizing for maximal distinguishability, the biggest of which may be that maximally distinguishable palettes with respect to any vision model tend to be quite garish and unpleasant. There's also the problem of which vision model to optimize for – if you optimize for just one, then you may make poor choices with respect to another. I believe you tried optimizing for the minimum of all the vision model distances but found that didn't work very well, probably because some of the models contradict each other. But we don't really need or even want maximally distinguishable colors – we just need colors that are sufficiently distinguishable for everyone. So a better approach seems like it would be to maximize the palette's pleasantness (by some measure – that's another issue altogether), within the constraint that it be sufficiently distinguishable under all vision models. Now we just need a measure of palette "pleasantness". I bet there's research on that too ;-)
I guess there are two ways to frame the problem: (1) maximize distinguishability, subject to a pleasantness constraint; or (2) maximize pleasantness, subject to a distinguishability constraint.
(2) is what you're proposing, and (1) is sort of what I'm doing now to choose colors in Gadfly, except the “pleasantness” constraint is just limiting the number of chroma and lightness values that can be selected from. I'm honestly not sure whether (1) or (2) would be better, but either approach would benefit from being able to quantify pleasantness. The problem is that perception of pleasantness is going to be all over the map. For example, we could maximize similarity to Wes Anderson palettes, which some people would love and others would think is hipster bullshit. But maybe we could just define “pleasantness” as “evenly spaced” in some way.
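To make framing (2) concrete, here is a toy sketch: a random search that maximizes a placeholder “pleasantness” score (even hue spacing) subject to a minimum separation under several CVD simulations. The threshold, the score, and the search strategy are all stand-ins, and it assumes the protanopic/deuteranopic/tritanopic simulation functions from Colors.jl:

```julia
using Colors, Statistics

# simulations under which the palette must stay distinguishable
const SIMS = (identity, protanopic, deuteranopic, tritanopic)

# smallest pairwise difference of a palette under one simulation
min_sep(pal, sim) = minimum(colordiff(sim(a), sim(b))
                            for (i, a) in enumerate(pal)
                            for b in pal[i+1:end])

# placeholder threshold; what counts as "sufficiently distinguishable"
# is exactly the open question in this thread
feasible(pal; thresh=20.0) = all(min_sep(pal, s) >= thresh for s in SIMS)

# placeholder "pleasantness": prefer evenly spaced hues at fixed L and C
pleasantness(pal) = -std(diff(sort([convert(LCHab, c).h for c in pal])))

function search(; iters=10_000)
    best, best_score = nothing, -Inf
    for _ in 1:iters
        pal = [LCHab(65.0, 50.0, 360rand()) for _ in 1:5]
        if feasible(pal) && pleasantness(pal) > best_score
            best, best_score = pal, pleasantness(pal)
        end
    end
    return best
end
```

A real implementation would use a proper constrained optimizer rather than random search, but the shape of the problem (soft objective, hard feasibility constraint per vision model) is the point.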
I don't find Gadfly's colors garish, and last I checked they're chosen by the maximize-distinguishability criterion. Just, as @dcjones says, in a constrained space.
Yes, the current colors are quite nice. I didn't actually realize that they were still being chosen to maximize distinguishability – I thought that had been given up on as the default color choice mechanism.
I think the current colors are ok. The main problem is that … Here's what I mean, to be concrete: http://nbviewer.ipython.org/gist/dcjones/416410b02476aa010c23
@StefanKarpinski The paper @dcjones is talking about also contains a recolorization / color-deficiency-friendly palette algorithm.
I'd be interested to hear what you think about Machado, Oliveira, & Fernandes (2009), available here. Their model is incredibly simple to implement (they provide precomputed 3x3 matrices that send RGB -> simulated anomalous RGB), and they model both dichromats and anomalous trichromats. (Brettel only models the former, even though the latter make up ~70% of the CVD population.) Two things I wonder about: (1) to compute the matrices they provide, they needed some estimate of the spectral distribution of their RGB primaries, and I have no idea what they used for this (colorimetric XYZ coordinates aren't enough, you actually need the full spectrum... which of course will vary even between calibrated sRGB monitors...), and (2) as far as I can tell from their math, these matrices should be applied in a linear-light space (i.e., RGB with the gamma decoding applied), but they're also pretty explicit in the paper that they just apply them directly in regular sRGB space ("The simulation operator...only requires one matrix operation per pixel"). Not sure what's up with that.
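To make the “which space?” question concrete, here is a sketch applying one of these matrices both ways; the matrix is the severity-1.0 one quoted later in this thread, and the linearization is the standard sRGB transfer function:

```julia
# severity-1.0 simulation matrix, as quoted later in this thread
const M_CVD = [ 1.255528 -0.076749 -0.178779
               -0.078411  0.930809  0.147602
                0.004733  0.691367  0.303900]

# standard sRGB transfer functions
to_linear(u) = u <= 0.04045 ? u / 12.92 : ((u + 0.055) / 1.055)^2.4
to_srgb(u)   = u <= 0.0031308 ? 12.92u : 1.055u^(1 / 2.4) - 0.055

# reading (1): apply in linear light, as the math seems to require
simulate_linear(rgb) = clamp.(to_srgb.(M_CVD * to_linear.(rgb)), 0, 1)

# reading (2): apply directly to gamma-encoded sRGB, as the paper describes
simulate_gamma(rgb) = clamp.(M_CVD * rgb, 0, 1)

simulate_linear([0.2, 0.6, 0.3]), simulate_gamma([0.2, 0.6, 0.3])
```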
@njsmith, this is the method I mentioned above. They provide spectra for both CRT and LED monitors in one of their papers. Their method would be “perfect” if you had the spectra of the display model you are simulating the color vision deficiency on, but apparently it’s accurate enough to use the representative spectra they provide. I started working on that algorithm but was busy with something else... Here is a more complete version of the paper, including the primary spectra:
I found a paper (Neitz & Neitz) on the spectral shift of the cones and the corresponding genetic variations, so you could even combine this simulation model with actual values for natural shifts of the peak wavelengths of the cone receptors. Personalized Simulation of Color Vision Deficiency seems to be an interesting idea, too:
Hi, I'm currently working on the Machado (2010) CVD model (https://github.com/colour-science/colour/tree/feature/cvd/colour/blindness) and have successfully managed to compute the protanomaly and deuteranomaly pre-computed matrices. Here are the matrices I compute:

```
Shift: 5nm
[[ 0.92667567  0.09249219 -0.01916786]
 [ 0.02118844  0.96450879  0.01430277]
 [ 0.00843841  0.05481969  0.93674191]]

Shift: 59nm
[[ 1.25474504 -0.07619839 -0.17854666]
 [-0.0781726   0.93065023  0.14752237]
 [ 0.00473967  0.69127251  0.30398782]]
```

and the ones from the paper:

```python
0.1: np.array(
    [[0.926670, 0.092514, -0.019184],
     [0.021191, 0.964503, 0.014306],
     [0.008437, 0.054813, 0.936750]]),
1.0: np.array(
    [[1.255528, -0.076749, -0.178779],
     [-0.078411, 0.930809, 0.147602],
     [0.004733, 0.691367, 0.303900]])
```

I think the issue comes from the fact that the Smith & Pokorny (1975) normal-trichromat CMFs have 5nm bins, so ∆λS represents 5nm steps rather than 1nm steps. I emailed Gustavo Machado about it, and also asked him to confirm whether the matrices are meant to be applied to linear values (I'm assuming that is the case). I'll keep you posted :)
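A quick illustration of the suspected unit mismatch, assuming CMFs tabulated on a 5nm grid:

```julia
# With 5nm bins, shifting by k array positions shifts the spectrum by
# 5k nm, not k nm. The grid below is an example, not the exact tabulation.
wavelengths = 390:5:830
shift_nm   = 20                             # desired ∆λS in nanometres
shift_bins = shift_nm ÷ step(wavelengths)   # 4 array positions, not 20
```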
Use the CIE 2006 CMFs instead. They are partly based on Smith/Pokorny, but they are available in 1nm steps. And these matrices are generally meant to be applied in linear spaces, as far as I know.
I originally used the CIE 2006 cone fundamentals and will use them in the future. I wanted to validate my computations, hence the use of the Smith & Pokorny (1975) CMFs.
We have an implementation of what seems to be the most popular dichromat simulation, but I've come to think that it's a pretty bad simulation. Basically, it increases the distance between some colors, sometimes by quite a lot, which can cause distinguishable_colors to do a sub-optimal job. Here's a demonstration: http://nbviewer.ipython.org/gist/dcjones/a81401479be97230d1e7
If someone knows of a better approach, please let me know!
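For context on how the simulation feeds into palette selection: distinguishable_colors can score candidate colors through a simulation function via its transform argument, so errors in the simulation directly skew the greedy choice. A sketch, assuming the current Colors.jl API (the package was Color.jl at the time of this issue):

```julia
using Colors

# greedily pick 8 colors, measuring distances *after* deuteranopic
# simulation, so the palette stays distinguishable under that model
cols = distinguishable_colors(8, [colorant"white"]; transform = deuteranopic)
```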