Random And Groundless Thoughts On Color Control In A Raw Converter


After finally finishing Fairchild's Color Appearance Models, I fell deep into thought, and some empirical things became clear.

Color Contrast

In the photographic community it is pretty much a commonplace that if you show a viewer two pictures of, say, a landscape, one with normal colors and one with somewhat increased saturation, the viewer will in most cases pick the more saturated one as the more natural (provided, of course, that saturation is increased within reasonable limits).

I could not quickly find this effect stated in Margulis's books, although I was almost certain it is there in one form or another: in his book Photoshop Lab Color, this rule (increasing the contrast along the a and b axes) is used starting with the very first example.

In earlier years I took the observation above for a sort of mnemonic rule, but it turns out to be the Hunt effect, which was quantitatively studied more than 50 years ago. The essence of the effect is that as luminance increases, perceived colorfulness increases too. Since we view photographs under much dimmer light than was present in the original scene (at least for daylight landscape shots), this difference needs to be compensated.
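As a rough illustration of how an appearance model quantifies this: CIECAM02's luminance-level adaptation factor F_L grows with the adapting luminance L_A, and it is the model's main handle on "brighter scenes look more colorful". The example luminances below are assumed for illustration, not measurements:

```python
def luminance_adaptation_factor(L_A: float) -> float:
    """CIECAM02 luminance-level adaptation factor F_L,
    for adapting luminance L_A in cd/m^2."""
    k = 1.0 / (5.0 * L_A + 1.0)
    return (0.2 * k**4 * (5.0 * L_A)
            + 0.1 * (1.0 - k**4) ** 2 * (5.0 * L_A) ** (1.0 / 3.0))

# Assumed example values: a sunlit scene (~3000 cd/m^2 average)
# versus a dim print-viewing situation (~60 cd/m^2).
outdoor = luminance_adaptation_factor(3000.0)
indoor = luminance_adaptation_factor(60.0)
```

Since F_L is larger for the sunlit scene, the model predicts higher colorfulness there, which is why a reproduction viewed under dim light needs a saturation boost to look the same.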

Brightness Contrast

It is the same with brightness contrast: for the majority of street scenes there is an empirical desire to increase the contrast. This effect was measured by Stevens and Stevens 45 years ago, and their results can be used quantitatively. The reason you need to increase the contrast of printouts is the same: you look at them under much dimmer, duller light than was present during the shooting and is presumed by the viewer.
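A minimal sketch of what such a compensation could look like: a power-law contrast boost applied in a ratio (log-linear) fashion around a mid-grey pivot, so that midtone slope increases while mid-grey itself is preserved. The gain value here is hypothetical; the actual exponent would come from the Stevens data for the two luminance levels involved:

```python
def compensate_contrast(y: float, gain: float = 1.15, pivot: float = 0.18) -> float:
    """Boost the contrast of normalized scene luminance y by raising
    its ratio to a mid-grey pivot to a power. gain and pivot are
    illustrative values, not measured ones."""
    return pivot * (y / pivot) ** gain

# Mid-grey stays put; tones above it get lighter, tones below get darker.
mid = compensate_contrast(0.18)
light = compensate_contrast(0.36)
dark = compensate_contrast(0.09)
```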

Toying With The Surroundings

Other effects that I knew only empirically also turn out to have been measured. For example, the dependence of visual contrast on the surround (the Bartleson-Breneman effect, quantitatively studied in 1967-75): if we view an image against a dark surround, the perceived contrast decreases. That is why slides, whose contrast is extremely high relative to the original scene, appear normal on the screen: we view them in a darkened room.

Numerical Description

The majority of the effects mentioned above are modeled well by modern color appearance models, for example CIECAM02. Naturally, such models take more input parameters than the conventional XYZ and Lab representations do. Besides the relative color coordinates (as well as the illuminant parameters, if we compute adaptation), they require the characteristics of the surround and the absolute luminance of the scene. Usually such data simply does not exist, and when it does, it is not recorded in the image metadata.
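To make the extra inputs concrete, here is a sketch of the viewing-conditions record a converter would have to fill in for CIECAM02, beyond the pixel's XYZ itself. The record layout and example numbers are hypothetical; the parameter names follow the model's usual notation:

```python
from dataclasses import dataclass

@dataclass
class ViewingConditions:
    """The extra inputs CIECAM02 needs beyond relative XYZ coordinates."""
    XYZ_white: tuple   # adopted white point of the scene or display
    L_A: float         # adapting luminance, in absolute cd/m^2
    Y_b: float         # relative luminance of the background (20 is typical)
    surround: str      # "average", "dim" or "dark"

# Hypothetical daylight scene: D65-ish white, moderately bright adaptation.
scene = ViewingConditions(
    XYZ_white=(95.05, 100.0, 108.88),
    L_A=600.0,
    Y_b=20.0,
    surround="average",
)
```

None of these fields can be derived from the pixel data alone, which is exactly the metadata gap the paragraph above describes.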

What Can Happen In Real Life: CIECAM02 In A Raw Converter

In real life, the luminance of the original scene can be recovered with sufficient precision from EXIF data: knowing the exposure and the ISO setting, we can reconstruct the luminance of each point in the picture. Of course, this only works in simple scenarios; if filters or flash were used, the estimate can be off. But in many cases such a calculation will still be acceptably accurate.
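A sketch of the calculation, using the standard reflected-light meter calibration relation L = K * N^2 / (t * S), where N is the f-number, t the shutter time, S the ISO speed, and K a calibration constant of about 12.5 cd/m^2 (the exact K varies slightly by manufacturer):

```python
def scene_luminance(f_number: float, shutter_s: float, iso: float,
                    K: float = 12.5) -> float:
    """Average scene luminance in cd/m^2 implied by a metered exposure.
    K ~= 12.5 cd/m^2 is the common reflected-meter calibration constant."""
    return K * f_number**2 / (shutter_s * iso)

# A classic sunny-day exposure: f/16 at 1/100 s, ISO 100.
L = scene_luminance(16.0, 1.0 / 100.0, 100.0)
# About 3200 cd/m^2, consistent with typical daylight scene averages.
```

The per-pixel luminance then follows by scaling this average by each pixel's recorded relative value; as noted, flash or filters break the assumption behind the formula.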

Based on the estimated scene luminance and the standard viewing conditions recommended by ISO 3664:2000, the converter can suggest starting positions for the contrast and saturation sliders, or starting curves for the same parameters.

In a number of cases we already know, at the raw conversion stage, what the viewing conditions of the result will be: a print, a screen projection, or a monitor. Then we know how to tweak the contrast and the black point for that output. It would be very sensible to have such an option in the converter, even if only as a starting point.

Postponing these tweaks to later steps of the processing pipeline, for example until after raw conversion is complete, seems dubious, since we would essentially be counteracting tweaks already applied at the raw conversion stage. Another issue is that there is currently no standard mechanism to pass information about the shooting conditions, and about the tweaks made in the raw converter, down the image processing pipeline. On top of that, different converters may apply numerically equal tweaks of the same parameter (say, saturation) with very different effects on the image.

Comments

Interesting reading.

One thing that I would point out is that human vision is a tricky beast and it may be difficult to draw the right conclusions out of empirical results. There are many, many things going on at once... we can get a sense of that from the wide variety of visual illusions out there.

I've found myself that a lot of the theories out there are wrong... e.g.
http://colorcorrection.info/color-science/hermann-grid-illusion-nobody-k...

As far as building a RAW converter goes, some of the color models like LAB might not necessarily be what photographers want.
http://colorcorrection.info/color-science/is-lab-useful-for-color-correc...

---
In the grand scheme of things, I think that the psychology of the end user also plays a huge role. E.g. with premium audio cables, there are 'snake oil' products that people buy because their expectations about the price (expensive = good, etc.) trump their actual perception!
When looking at a photograph, there are noticeable differences between the reproduced image and real life. e.g. not 3-D, dynamic range cannot reproduce specular highlights, artifacts, etc. etc. We can tell whether we just looked at a photograph or through a window. But for the most part, people forgive that and don't pay attention to the technical flaws/shortcomings.

I think if photographers want to make truly "realistic" photos, there is art involved in the photographer tweaking the controls manually to look right. Part of this is to capture the 'signal processing' done by our brain. And sometimes part of it is to aim for naturalism... where the image looks like what it should look like.
To clarify what I mean by naturalism... in audio, we expect gunshots to sound like what they sound like in the movies (they have a certain roar from the compression effects applied to them). Yet actual recordings of gunshots sound very different.

Glenn,

Lab is definitely not the best choice. On the other hand, we need some widely recognized color space for image storage/exchange. From this point of view, Lab is a better choice (RGB and CMYK are imperfect, other color spaces are uncommon). The L axis of Lab is good enough.

AFAIK, there are no 'digitizable' perceptually uniform color spaces. The Munsell scale is good, but there is no easy way to calculate perceptually intermediate color values.

-- Alex Tutubalin

Dear Glenn, I'm afraid Lab was not designed to be perceptually uniform, but to be deltaE uniform.

--
Iliah Borg


Sorry, link to warez site deleted by LibRaw admin

CIECAM02

Hi

I successfully applied CIECAM02 to a RAW imaging workflow. The main problem was to identify the adopted white from scene colorimetry as an input to CIECAM02. I ended up using the camera's white balance for ease, in order to complete the experiment. The scene illumination was calculated from ISO and exposure and gave favorable results. The main part of the process was to characterise the camera using a double monochromator and calculate the camera's spectral response. This was then used to determine the device RGB to XYZ matrix by minimizing errors in JMh space for different values of La and white point.

The resulting matrices were used to transform raw RGB to XYZ according to the scene white and La, and then passed through CIECAM02. The internal coordinates used were JMh, and the output was rendered for standard sRGB viewing conditions through the reverse CIECAM02 transform.
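The first step of the pipeline described above, mapping camera RGB to XYZ with a characterization matrix, can be sketched as follows. The matrix below is made up for illustration (roughly sRGB-like, normalized so that RGB white maps to Y = 1); a real one would come from the monochromator measurement:

```python
def rgb_to_xyz(rgb, M):
    """Apply a 3x3 characterization matrix M to a camera RGB triple."""
    return tuple(sum(M[i][j] * rgb[j] for j in range(3)) for i in range(3))

# Hypothetical camera RGB -> XYZ matrix; real values come from
# spectral characterization, not from this sketch.
M = [[0.41, 0.36, 0.18],
     [0.21, 0.72, 0.07],
     [0.02, 0.12, 0.95]]

# Camera white (1, 1, 1) should land on the adopted white's chromaticity
# with Y = 1 before entering the forward CIECAM02 transform.
xyz_white = rgb_to_xyz((1.0, 1.0, 1.0), M)
```

The XYZ triple produced here, together with the scene white and La, is what feeds the forward CIECAM02 step of the workflow.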

I got some pretty good results (although maybe not appearance-perfect) compared to the sRGB JPEG produced by the Nikon D70s.

DCRAW and Matlab were used for computation.

Looks very promising.

Is there some code (plus profile) to play with?

-- Alex Tutubalin

ciecam02

http://www.digitalcolour.org/toolbox.htm

Dr Green has Matlab code for CIECAM02.

As for profiles: there is no profile as such. You need to determine your own camera's RGB to XYZ response. If you have a D70s I could send you some matrices that I made earlier. I am in the process of publishing a paper on the subject. I'll keep you posted.
