-
Hi, I am using a colour chart to colour correct images, using the colour.characterisation.colour_correction_Cheung2004 function. The results I get are pretty good, but I do not completely understand the underlying theory, so maybe someone can help me with that :)

I have read the article by Cheung et al., and in it he states that the transformation matrix A to be found is the matrix that transforms your RGB values to the tristimulus values. In this transformation, he uses "augmented monitor values". What these augmented values look like depends on the number of terms you choose to use (correct me if I'm wrong). What is the reason for not taking just the R, G and B values, but using more terms for the transform? In practice, I see that more terms generally give better results, but I don't understand where these augmented values come from.

Secondly, as far as I understand, the colour_correction function performs a least-squares fit (using the pseudo-inverse) between the reference RGB values and the augmented monitor values measured from your colour chart. This means that the colour correction matrix that is computed is a best fit for the transformation between the augmented monitor values and the reference RGB values. In Cheung's article, however, he describes that the transformation between the augmented monitor values and the tristimulus values is needed for the correction. I understand that going from the augmented values to the corrected RGB values makes sense, because that is what you want in order to correct an image, but I don't understand why the process still works when a transformation to tristimulus values is never made.

Hope somebody can clear this up for me! Cheers!
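For what it's worth, the "augmented monitor values" are just polynomial expansions of each RGB triplet. Here is a minimal NumPy sketch of what a few of the Cheung et al. (2004) term layouts look like; the `augment_cheung2004` helper is made up for illustration (I believe colour-science exposes the real thing as `colour.characterisation.polynomial_expansion_Cheung2004`), and the exact layouts shown are my reading of the paper:

```python
import numpy as np

def augment_cheung2004(RGB, terms=5):
    """Illustrative polynomial augmentation of a single RGB triplet.

    Only a few of the term layouts from Cheung et al. (2004) are
    sketched here; the real library supports many more.
    """
    R, G, B = RGB
    if terms == 3:
        # Plain linear terms: a 3x3 matrix fit, i.e. a linear transform.
        return np.array([R, G, B])
    if terms == 5:
        # Linear terms plus the triple product and a constant offset.
        return np.array([R, G, B, R * G * B, 1.0])
    if terms == 7:
        # Linear terms, the pairwise cross products and a constant.
        return np.array([R, G, B, R * G, R * B, G * B, 1.0])
    raise ValueError("Unsupported number of terms in this sketch.")

augmented = augment_cheung2004([0.2, 0.5, 0.8], terms=7)
```

With 3 terms the fit can only be a plain 3x3 matrix; the extra cross-product and constant terms are what let the fitted matrix bend the mapping beyond a purely linear transform.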
-
Hi @JopdeBoo,

Sorry for missing this one! Long story short, the more terms you use, the more complex the polynomial used to fit the data will be. With 3 terms, the polynomial has degree 1 and basically represents a linear transformation. Thus, to get a perfect correction in that case, the two datasets you are fitting should only differ by a rotation and a scaling, because that is the only kind of difference a linear transformation can model. As you introduce more terms, the polynomial degree increases, which allows correcting for local distortions.

Here is an image illustrating what I just said:

To your second question, there is no limit or constraint on what you fit to; you could very well transform from CIE XYZ to RGB or whatever using the various correction definitions.

I hope that helps,

Cheers,

Thomas
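The least-squares fit with the pseudo-inverse mentioned in the question can be sketched in a few lines of NumPy. This is not colour-science's actual implementation, just an illustration with synthetic data: given measured values M and reference values T (one patch per row), the matrix A minimising ||A·M − T|| is T·pinv(M):

```python
import numpy as np

# Synthetic example: 24 "measured" patches in [0, 1].
rng = np.random.default_rng(42)
measured = rng.uniform(0.0, 1.0, (24, 3))

# Pretend the reference chart differs by a known linear distortion,
# so we can check that the fit recovers it.
true_matrix = np.array([[1.10, 0.05, 0.00],
                        [0.02, 0.90, 0.03],
                        [0.00, 0.04, 1.05]])
reference = measured @ true_matrix.T

# Least-squares fit via the Moore-Penrose pseudo-inverse.
# Columns are patches here, hence the transposes.
A = reference.T @ np.linalg.pinv(measured.T)

corrected = (A @ measured.T).T
error = np.abs(corrected - reference).max()
```

Since the synthetic distortion is exactly linear, the recovered `A` matches `true_matrix` to machine precision; with real chart data the fit is only a best-effort minimiser of the residual, and it works the same way whatever target space the reference values live in, RGB or tristimulus.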
-
Hi @KelSolaar,

This explanation helped a lot! It also explains some weird results I was getting when using a high number of terms (terms=22). Is there a way to know what number of terms is appropriate for your fit? Or is it just trial and error?

Thanks for your help!

Jop
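One common alternative to pure trial and error is cross-validation: fit the correction on all patches but one, measure the error on the held-out patch, and prefer the term count with the lowest held-out error. A high-degree polynomial that fits the training patches perfectly but fits noise will show up as a large held-out error. This is a hypothetical sketch with a made-up `expand` helper and synthetic noisy data, not a colour-science feature:

```python
import numpy as np

def expand(RGB, terms):
    """Toy augmentation for this sketch; rows of RGB are patches."""
    R, G, B = RGB.T
    if terms == 3:
        return np.stack([R, G, B], axis=-1)
    if terms == 7:
        return np.stack([R, G, B, R * G, R * B, G * B,
                         np.ones_like(R)], axis=-1)
    raise ValueError(terms)

def loo_error(measured, reference, terms):
    """Leave-one-out error: fit on n - 1 patches, test on the held-out one."""
    n = len(measured)
    errors = []
    for i in range(n):
        train = np.arange(n) != i
        M = expand(measured[train], terms)
        A = reference[train].T @ np.linalg.pinv(M.T)
        pred = A @ expand(measured[~train], terms).T
        errors.append(np.abs(pred.T - reference[~train]).mean())
    return float(np.mean(errors))

# Synthetic noisy chart: a linear distortion plus measurement noise.
rng = np.random.default_rng(0)
measured = rng.uniform(0.0, 1.0, (24, 3))
reference = measured @ np.array([[1.10, 0.05, 0.00],
                                 [0.02, 0.90, 0.03],
                                 [0.00, 0.04, 1.05]]).T
reference += rng.normal(0.0, 0.01, reference.shape)

# Pick the term count with the lowest held-out error.
best = min((3, 7), key=lambda t: loo_error(measured, reference, t))
```

With only 24 patches on a typical chart and up to 22 fitted coefficients per channel, the high-term fits have very little data to constrain them, which is consistent with the weird results at terms=22.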