Random And Groundless Thoughts On Color Control In A Raw Converter
After finally finishing reading Fairchild's Color Appearance Models, here are a few random and groundless thoughts.
In the photographic community it is pretty much a commonplace that if you show a viewer two pictures of, say, a landscape, one with normal colors and one with increased saturation, the viewer will in most cases pick the more saturated one as the more natural (provided, of course, that the saturation is increased within reasonable limits).
I could not quickly find the wording of this effect in Margulis's books, although I was almost certain it appears there in one form or another.
In earlier years I took the above-mentioned anecdote for a sort of mnemonic rule, but it turns out to be the Hunt effect, which was quantitatively studied more than 50 years ago. The essence of the effect is that as luminance increases, perceived colorfulness increases too. Since we view photos under much dimmer light than was present in the original scene (at least for daylight landscape shots), this difference needs to be compensated for.
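CIECAM02 captures the Hunt effect through its luminance-level adaptation factor F_L: colorfulness is chroma scaled by F_L raised to the power 0.25, so the same chroma reads as more colorful under brighter adaptation. A minimal sketch of that part of the model (the function names are mine):

```python
def luminance_adaptation_factor(L_A: float) -> float:
    """CIECAM02 luminance-level adaptation factor F_L.

    L_A is the luminance of the adapting field in cd/m^2
    (commonly taken as about 20% of the white luminance).
    """
    k = 1.0 / (5.0 * L_A + 1.0)
    return (0.2 * k**4 * (5.0 * L_A)
            + 0.1 * (1.0 - k**4)**2 * (5.0 * L_A)**(1.0 / 3.0))

def colorfulness(chroma: float, L_A: float) -> float:
    """CIECAM02 colorfulness M = C * F_L^0.25: the same chroma looks
    more colorful under brighter adaptation -- the Hunt effect."""
    return chroma * luminance_adaptation_factor(L_A)**0.25
```

The ratio colorfulness(C, L_scene) / colorfulness(C, L_view) for a sunny scene versus print-viewing luminances exceeds one, which is one way to quantify how much saturation compensation the dimmer viewing conditions call for.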
The same goes for brightness contrast: for the majority of street scenes there is an empirical desire to increase it. This effect was measured by Stevens and Stevens 45 years ago, and the results can be used quantitatively. The reason you need to increase the contrast of prints is the same: you look at them under much dimmer light than was present during the shooting and is presumed by the viewer.
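A toy sketch of what such a compensation could look like. Stevens and Stevens found that the exponent of the brightness power law grows with adapting luminance, so one could apply the ratio of scene and viewing exponents as a contrast boost. The log-linear interpolation and its anchor values below are assumptions made for illustration, not published fits:

```python
import math

def stevens_contrast_exponent(L_A: float) -> float:
    """Illustrative only: the brightness power-law exponent grows with
    adapting luminance L_A (cd/m^2). The anchor values 0.27 at
    1 cd/m^2 and 0.41 at 10,000 cd/m^2 are assumptions for this
    sketch, not fitted data."""
    t = max(0.0, min(1.0, math.log10(max(L_A, 1.0)) / 4.0))
    return 0.27 + t * (0.41 - 0.27)

def contrast_compensation(y: float, L_scene: float, L_view: float) -> float:
    """Boost tonal contrast when the image is viewed under dimmer
    adaptation than the original scene: apply the ratio of scene and
    viewing exponents as a power on relative luminance y in [0, 1]."""
    gamma = stevens_contrast_exponent(L_scene) / stevens_contrast_exponent(L_view)
    return y ** gamma
```

With a bright scene and dim viewing the resulting exponent is above one, so midtones are pushed down and the tone curve steepens, which matches the empirical urge to add contrast.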
Toying With The Surroundings
Other effects that I knew only empirically turn out to have been measured as well. For example, the dependence of visual contrast on the surround (the Bartleson-Breneman effect, studied quantitatively in 1967-75): when an image is viewed against a dark background, its apparent contrast decreases. That is why slides, whose contrast is extremely high relative to the original scene, look normal on the projection screen: we view them in a darkened room.
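A common rendering-practice consequence of this effect is to encode images destined for dim or dark surrounds with a higher end-to-end gamma; the exponents below (roughly 1.25 for dim and 1.5 for dark surrounds) are conventional ballpark figures from imaging practice, not prescriptions:

```python
# Typical end-to-end contrast boosts motivated by the Bartleson-Breneman
# observations: the dimmer the surround, the more contrast the image needs.
SURROUND_GAMMA = {"average": 1.0, "dim": 1.25, "dark": 1.5}

def compensate_surround(y: float, surround: str) -> float:
    """Apply a surround-dependent power to relative luminance y in [0, 1].
    An exponent greater than 1 steepens the tone curve, offsetting the
    contrast loss perceived against a dark background."""
    return y ** SURROUND_GAMMA[surround]
```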
Most of the above-mentioned effects are reproduced well by modern color appearance models, for example CIECAM02. Naturally, such models take more input parameters than the conventional XYZ and Lab representations do. Besides the relative color coordinates (and the parameters of the illuminants, if adaptation is computed), they require the characteristics of the surround and data on the actual luminances in the scene. Usually such data simply does not exist, and when it does, it is not recorded in the image metadata.
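For reference, a sketch of the extra inputs CIECAM02 expects beyond the tristimulus values of the stimulus and the white point. The surround parameter sets are the standard ones from the CIECAM02 specification (CIE 159:2004); the container types and function names are mine:

```python
from dataclasses import dataclass

@dataclass
class ViewingConditions:
    """Inputs CIECAM02 needs beyond the stimulus and white-point XYZ."""
    L_A: float   # adapting field luminance, cd/m^2
    Y_b: float   # relative luminance of the background (typically 20)
    F: float     # maximum degree-of-adaptation factor
    c: float     # impact of surround on lightness/brightness
    N_c: float   # chromatic induction factor

# Standard CIECAM02 surround parameter sets: (F, c, N_c)
SURROUNDS = {
    "average": (1.0, 0.69,  1.0),
    "dim":     (0.9, 0.59,  0.9),
    "dark":    (0.8, 0.525, 0.8),
}

def conditions(surround: str, L_A: float, Y_b: float = 20.0) -> ViewingConditions:
    """Bundle the surround table entry with the scene-specific luminances."""
    F, c, N_c = SURROUNDS[surround]
    return ViewingConditions(L_A=L_A, Y_b=Y_b, F=F, c=c, N_c=N_c)
```

It is exactly L_A and the surround choice that a raw converter cannot know without extra information about the scene and the viewing environment.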
What Can Happen In Real Life: CIECAM02 In A Raw Converter
In real life, the luminance of the original scene can be recovered with sufficient precision from EXIF data: knowing the exposure and the ISO setting, we can reconstruct the luminance of each point in the picture. Of course, this only works in the simple scenario; if filters or flash were used, the estimate can be off. But in many cases such a calculation will still be acceptably accurate.
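Such an estimate could follow the standard reflected-light metering equation L = (N^2 / t) * (K / S), with the calibration constant K of about 12.5 cd/m^2 that most manufacturers use. A sketch under that assumption:

```python
def average_scene_luminance(f_number: float, exposure_time: float,
                            iso: float, K: float = 12.5) -> float:
    """Estimate the average scene luminance (cd/m^2) implied by the
    exposure settings via the reflected-light meter equation
    L = (N^2 / t) * (K / S), where N is the f-number, t the exposure
    time in seconds, S the ISO speed, and K ~= 12.5 the common meter
    calibration constant. Valid only for a straightforward metered
    exposure: filters, flash, or exposure compensation throw it off.
    """
    return (f_number ** 2 / exposure_time) * (K / iso)
```

For example, f/8 at 1/125 s and ISO 100 implies an average scene luminance of about 1000 cd/m^2, a plausible figure for an overcast daylight scene.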
Based on the luminance of the original scene and the standard viewing conditions recommended by ISO 3664:2000, the converter can suggest starting positions for the contrast and saturation sliders, or starting curves for the same parameters.
In a number of cases we already know at the raw conversion stage what the viewing conditions of the result will be: a print, a screen projection, or on-monitor viewing. We therefore know how to tweak the contrast and the black point for, say, a printout. It makes a lot of sense to have such an option in the converter, even if only as a starting point.
Postponing these tweaks to later steps of the processing pipeline, until after the raw conversion is done, seems dubious: we would essentially be counteracting tweaks already applied at the raw conversion stage. Another problem is that there is currently no standard mechanism for passing information about the shooting conditions, and about the tweaks made in the raw converter, down the image processing pipeline; on top of that, different converters may apply numerically equal tweaks of the same parameter (say, saturation) with very different effects on the image.