Better color vision deficiency simulation #43

Open · dcjones opened this issue Jul 10, 2014 · 16 comments
@dcjones (Contributor) commented Jul 10, 2014

We have an implementation of what seems to be the most popular dichromat simulation, but I've come to think it's a pretty bad one. Basically, it increases the distance between some colors, sometimes by quite a lot, which can cause distinguishable_colors to do a sub-optimal job. Here's a demonstration:

http://nbviewer.ipython.org/gist/dcjones/a81401479be97230d1e7

If someone knows of a better approach, please let me know!
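
To make the distortion concrete, here is a minimal sketch of the kind of check involved (assuming the package's colordiff and deuteranopic, as used elsewhere in this thread):

using Color  # the package later renamed Colors

# two colors that trichromats can tell apart easily
a = RGB(1.0, 0.2, 0.2)
b = RGB(0.2, 0.6, 0.2)

d_normal = colordiff(a, b)                              # trichromat distance
d_sim    = colordiff(deuteranopic(a), deuteranopic(b))  # distance after simulation

# If the simulation were well-behaved, this ratio would rarely exceed 1;
# the notebook linked above shows pairs where it does, by a lot.
println(d_sim / d_normal)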

@m-lohmann (Contributor)

I’ll take a look at the papers on dichromacy that I have collected.
Some dichromats actually perceive different color distances; for example, they can be better at spotting camouflage than color-normal trichromats.
I remember a paper I found a while ago in which the proposed algorithm supposedly worked quite well, as confirmed by protanope and deuteranope test subjects who had to compare the color prediction with their own perception. I’ll let you know when I find it.

What happens if you use another difference metric?
Or a more uniform color space to create your normal color palette?

@dcjones (Contributor, Author) commented Jul 11, 2014

That's very interesting. So it's possible that it's not quite as bad as I thought. If that's the case, for the purpose of choosing colors I could use something like
min(colordiff(a, b), colordiff(deuteranopic(a), deuteranopic(b)))
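
Wrapped up as a reusable metric, that might look like the following sketch (hedged; protanopic and tritanopic analogues exist in the package as well, so the worst case over all three dichromat types could be taken):

# a pair only counts as distinguishable if it is distinguishable both
# for trichromats and under deuteranope simulation
cvd_diff(a, b) = min(colordiff(a, b),
                     colordiff(deuteranopic(a), deuteranopic(b)))

# or the worst case over all three dichromat types:
cvd_diff_all(a, b) = minimum([colordiff(a, b),
                              colordiff(protanopic(a), protanopic(b)),
                              colordiff(deuteranopic(a), deuteranopic(b)),
                              colordiff(tritanopic(a), tritanopic(b))])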

@m-lohmann (Contributor)

I guess that could be a good starting point. Maybe using even more sophisticated color spaces, like the DIN versions, could lead to more regular palettes. I’m about to implement the IPT color space, which is often used in image and color perception algorithms. It seems to have excellent properties, especially concerning hue shifts (unlike CIELAB, which performs quite badly in the blue region) and color difference evaluation.
I searched the internet for more papers and information on dichromacy. Apparently it’s even possible to devise a complementary test to Ishihara plates where dichromats see color differences but normal trichromats don’t. Dichromats often see brightness differences where trichromats don’t. I guess that could be a partial explanation of why the resulting palettes seem irregular under the simulated color blindness.
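
For reference, a minimal sketch of the IPT transform as published by Ebner & Fairchild (1998); the matrix values below are quoted from memory of that paper, so verify them against the original before relying on this:

# XYZ (D65-adapted) -> IPT, after Ebner & Fairchild (1998)
const M_XYZ2LMS = [ 0.4002  0.7075 -0.0807;
                   -0.2280  1.1500  0.0612;
                    0.0     0.0     0.9184]

const M_LMS2IPT = [ 0.4000  0.4000  0.2000;
                    4.4550 -4.8510  0.3960;
                    0.8056  0.3572 -1.1628]

function xyz_to_ipt(xyz)
    lms = M_XYZ2LMS * xyz
    lms2 = [sign(v) * abs(v)^0.43 for v in lms]  # signed 0.43 power nonlinearity
    M_LMS2IPT * lms2
end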

@m-lohmann (Contributor)

I found a rather new paper (from 2010) on color deficiency simulation. In contrast to Brettel et al.’s algorithm (from 1997), it uses the physiologically based two-stage model of color vision to simulate protanopia, deuteranopia and tritanopia, plus anomalous trichromacy (protanomaly, deuteranomaly and tritanomaly) at varying degrees of severity. The paper also contains a real-time contrast enhancement algorithm for dichromats that is temporally coherent.

The title is: A Model for Simulation of Color Vision Deficiency and A Color Contrast Enhancement Technique for Dichromats.

Here is the link to the pdf file: http://www.lume.ufrgs.br/bitstream/handle/10183/26950/000761444.pdf?sequence=1

Looks really promising to me.

@m-lohmann (Contributor)

I tried your difference matrix with the DIN99o color space instead, and the range of differences is considerably smaller: between +10 and -50 instead of +20 and -80. The range of differences also depends strongly on the chroma/saturation of the color samples used in the calculation.

Basically, the only regions that are interesting for color scales are the first row (which gives the difference between neighboring colors in the scale) and the change in difference relations between the red and green hues. The difference in the first row is largest for the green and red/purple hues, which is not surprising, because both are shifted towards ochre/yellowish hues; apparently the difference is higher for the red hues. The spots in the upper-right and lower-left corners (e.g. around column 10) just show that the distance change is larger for trichromats than for dichromats: red and green hues both shift towards the yellowish regime for dichromats, which shrinks the perceived color distance between red/purple and green hues.

I hope my interpretation of the distance matrix is correct.

I used something like

n = 25; L = 70; C = 30
# a hue circle at constant lightness L and chroma C in DIN99o
# (note that linspace includes both endpoints, so the hues 0° and 360° coincide)
cs = [DIN99o(L, C*cosd(h), C*sind(h)) for h in linspace(0, 360, n)]

for the color scales. Different chroma values lead to larger differences (at other locations), which seems normal to me.

There is another interesting difference if you use DIN99o instead of LCHuv: the locations of highest difference shift towards different regions of the color scale.
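
For reference, the difference matrix itself can be computed along these lines (a sketch, reusing cs and n from above; positive entries mean the simulation increased the distance):

# change in pairwise distance between the trichromat view and the
# simulated deuteranope view of the same color scale
D = [colordiff(deuteranopic(cs[i]), deuteranopic(cs[j])) - colordiff(cs[i], cs[j])
     for i in 1:n, j in 1:n]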

@StefanKarpinski (Contributor)

This is slightly tangential, but during the Q&A after @dcjones's talk at JuliaCon, I suggested using constrained optimization solvers to design color-blindness-friendly palettes, which got a good laugh from the crowd. But I'm not so sure that it's ridiculous.

There are various problems with optimizing for maximal distinguishability, the biggest of which may be that maximally distinguishable palettes, with respect to any vision model, tend to be quite garish and unpleasant. There's also the problem of which vision model to optimize for: if you optimize for just one, you may make poor choices with respect to another. I believe you tried optimizing for the minimum of all the vision model distances but found that didn't work very well, probably because some of the models contradict each other.

But we don't really need or even want maximally distinguishable colors – we just need colors that are sufficiently distinguishable for everyone. So a better approach seems like it would be to optimize the palette for pleasantness (by some measure – that's another issue altogether), within the constraint that it be sufficiently distinguishable under all vision models. Now we just need a measure of palette "pleasantness". I bet there's research on that too ;-)

@dcjones (Contributor, Author) commented Jul 19, 2014

I guess there are two ways to frame the problem:

  1. maximize distinguishability with constraints on pleasantness
  2. maximize pleasantness with constraints on distinguishability

(2) is what you're proposing, and (1) is sort of what I'm doing now to choose colors in Gadfly, except that the “pleasantness” constraint is just limiting the number of chroma and lightness values that can be selected from. I'm honestly not sure whether (1) or (2) would be better, but either approach would benefit from being able to quantify pleasantness.

The problem is that perception of pleasantness is going to be all over the map. For example, we could maximize similarity to Wes Anderson palettes, which some people would love and others would think is hipster bullshit. But maybe we could just define “pleasantness” as “evenly spaced” in some way; see the sketch below.
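
As a rough sketch of framing (2), with “evenly spaced” standing in for pleasantness, cvd_diff_all as defined earlier in the thread, and candidates and θ as hypothetical placeholders:

# hard constraint: every pair sufficiently distinguishable under all models
function palette_ok(cs, θ)
    n = length(cs)
    for i in 1:n, j in i+1:n
        cvd_diff_all(cs[i], cs[j]) < θ && return false
    end
    true
end

# "pleasantness" proxy: neighboring distances should be as uniform as possible
evenness(cs) = -var([colordiff(cs[i], cs[i+1]) for i in 1:length(cs)-1])

# pick the most evenly spaced palette among the feasible candidates
feasible = filter(cs -> palette_ok(cs, θ), candidates)
best = feasible[indmax(map(evenness, feasible))]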

@timholy (Contributor) commented Jul 19, 2014

I don't find Gadfly's colors garish, and last I checked they're chosen by the maximize-distinguishability criterion. Just, as @dcjones says, in a constrained space.

@StefanKarpinski (Contributor)

Yes, the current colors are quite nice. I didn't actually realize that they were still being chosen to maximize distinguishability – I thought that had been given up on as the default color choice mechanism.

@dcjones (Contributor, Author) commented Jul 19, 2014

I think the current colors are OK. The main problem is that distinguishable_colors becomes really finicky with color blindness simulation, which I think is more a problem with the simulation than with distinguishable_colors, hence this issue.

Here's what I mean, to be concrete: http://nbviewer.ipython.org/gist/dcjones/416410b02476aa010c23

@m-lohmann (Contributor)

@StefanKarpinski The paper @dcjones is talking about also contains a recolorization / color-deficiency-friendly palette algorithm.
I find the daltonize algorithm from Vischeck quite effective, and I have the impression that the algorithm in the above-mentioned paper leads to comparable or maybe even better results.

Edit:
@dcjones You might even consider using the 2006 CMFs for the color deficiency simulation instead of the Judd–Vos modifications, because the new CMFs are based on a larger sample of observers and are more precise. I read several of Henning’s, Stockman’s and Sharpe’s papers on defining better CMFs, and a lot of effort went into improving the data, such as considering the genotype of the observers.

@njsmith commented Jun 11, 2015

I'd be interested to hear what you think about Machado, Oliveira, & Fernandes (2009), available here. Their model is incredibly simple to implement (they provide precomputed 3x3 matrices that send RGB -> simulated anomalous RGB), and they model both dichromats and anomalous trichromats. (Brettel only models the former, even though the latter make up ~70% of the CVD population.)

Two things I wonder about: (1) to compute the matrices they provide, they needed some estimate of the spectral power distribution of their RGB primaries, and I have no idea what they used for this (colorimetric XYZ coordinates aren't enough; you actually need the full spectrum... which of course will vary even between calibrated sRGB monitors...), and (2) as far as I can tell from their math, these matrices should be applied in a linear-light space (i.e., RGB with the gamma removed), but they're also pretty explicit in the paper that they just apply them directly in regular sRGB space ("The simulation operator...only requires one matrix operation per pixel"). Not sure what's up with that.
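
For what it's worth, applying one of their matrices in linear light would look roughly like the sketch below, where M stands for one of the paper's precomputed 3x3 matrices; whether the linearization step is actually intended is exactly the open question:

# sRGB <-> linear-light conversions (standard sRGB transfer function)
srgb_to_linear(u) = u <= 0.04045 ? u / 12.92 : ((u + 0.055) / 1.055)^2.4
linear_to_srgb(u) = u <= 0.0031308 ? 12.92u : 1.055u^(1 / 2.4) - 0.055

# rgb is a length-3 vector in [0, 1]; M is a matrix from the paper's tables
function simulate_cvd(rgb, M)
    lin = map(srgb_to_linear, rgb)   # remove the sRGB gamma first
    sim = M * lin                    # apply the precomputed simulation matrix
    map(u -> linear_to_srgb(clamp(u, 0.0, 1.0)), sim)
end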

@m-lohmann (Contributor)

@njsmith, this is the method I mentioned above. They provide spectra for both CRT and LCD monitors in one of their papers. Their method would be “perfect” if you had the spectra of the display model you are simulating the color vision deficiency on, but apparently it’s exact enough to use the example spectra they provide. See “Appendix C: Simulation of CVD using the SPD of an LCD” in their paper.

I started working on that algorithm but was busy with something else...
As far as I remember, I drew the conclusion that the whole thing should work entirely in linear-light space, because they project the (linear) RGB primaries into the linear Ingling–Tsou opponent color space, which is derived from the linear LMS cone space.

Here is a more complete version of the paper, including the primary spectra:
http://www.lume.ufrgs.br/bitstream/handle/10183/26950/000761444.pdf?sequence=1

I found a paper (Neitz & Neitz) on the spectral shift of the cones and the corresponding genetic variations. So you could even combine this simulation model with actual values for natural shifts of the peak wavelength of the cone receptors.

Personalized Simulation of Color Vision Deficiency seems to be an interesting idea, too:
http://hci.usask.ca/uploads/311-GRAND_Simulation.2.pdf

@KelSolaar

Hi,

I'm currently working on the Machado (2010) CVD model (https://github.com/colour-science/colour/tree/feature/cvd/colour/blindness) and have successfully managed to reproduce the Protanomaly and Deuteranomaly pre-computed matrices.
However, I have issues with the Tritanomaly ones: it seems like ∆λS is not in the domain [0, 20] nm but more like [5, 59] nm.

Here are the matrices I compute:

Shift: 5nm
[[ 0.92667567  0.09249219 -0.01916786]
 [ 0.02118844  0.96450879  0.01430277]
 [ 0.00843841  0.05481969  0.93674191]]

Shift: 59nm
[[ 1.25474504 -0.07619839 -0.17854666]
 [-0.0781726   0.93065023  0.14752237]
 [ 0.00473967  0.69127251  0.30398782]]

and the ones from the paper:

0.1: np.array(
    [[0.926670, 0.092514, -0.019184],
     [0.021191, 0.964503, 0.014306],
     [0.008437, 0.054813, 0.936750]]),
1.0: np.array(
    [[1.255528, -0.076749, -0.178779],
     [-0.078411, 0.930809, 0.147602],
     [0.004733, 0.691367, 0.303900]])

I think the issue comes from the fact that the Smith & Pokorny (1975) normal trichromat CMFs have 5 nm bins, and thus ∆λS doesn't represent 1 nm steps but 5 nm steps.

I sent a mail to Gustavo Machado about it, and also to confirm whether the matrices are meant to be applied to linear values (I'm assuming that's the case).

I'll keep you posted :)

@m-lohmann (Contributor)

Use the CIE 2006 CMFs instead. They are partly based on Smith/Pokorny, but they are available in 1 nm steps. And these matrices are generally meant to be applied in linear spaces, as far as I know.

@KelSolaar

I originally used the 2006 Cone Fundamentals and will use them in the future. I wanted to validate my computations, hence the use of the Smith & Pokorny (1975) CMFs.
