
Order of operation and correct level-adjustment

Hi,

I am trying to reproduce (in my own code, which is due to be converted into a CUDA tool) libraw's pre-processing and post-processing steps. So far I have a working pipeline, but I need some help understanding a few steps.

What I do:

- load the image by "open_file(char*)"
- unpack the file content by "unpack()"
- call subtract_black() (but it seems black levels are not set in my images: all imgdata.rawdata.color.cblack[x] entries are 0, and min/max values before and after subtract_black() are identical)
- safely multiply imgdata.rawdata.raw_image by 65535/imgdata.color.maximum ("safely" because I clip data higher than imgdata.color.maximum to the maximum value)
- divide imgdata.color.cam_mul[0-3] by its smallest component and use the resulting color multiplier to ...
- scale and clip the raw data at imgdata.rawdata.raw_image
- do my GPU demosaic
- resulting RGB image is supposed to be in "camera color space" now (looks too dark and colors are off, see below)
- apply the imgdata.color.rgb_cam[][] matrix to the RGB image (expecting an sRGB result). This step completely fails: the resulting image clips and runs into "weird color"
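To make the above concrete, here is a minimal sketch of steps 3-6 of my workflow on a plain Bayer buffer. The names are mine, not LibRaw's: cfa() stands in for LibRaw's COLOR() and assumes an RGGB pattern, and I assume a single black level for simplicity.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Stand-in for LibRaw's COLOR(row, col); assumes an RGGB pattern.
static inline int cfa(int row, int col)
{
    static const int pattern[2][2] = { { 0, 1 }, { 1, 2 } }; // R G / G B
    return pattern[row & 1][col & 1];
}

// Sketch of steps 3-6 above on a plain Bayer buffer.
// "black"/"maximum" stand in for imgdata.color.black / imgdata.color.maximum,
// "camMul" for the first three entries of imgdata.color.cam_mul.
void preprocess(std::vector<uint16_t>& raw, int width, int height,
                unsigned black, unsigned maximum, const double camMul[3])
{
    // White-balance multipliers normalized so the smallest becomes 1.0.
    double m = std::min({ camMul[0], camMul[1], camMul[2] });
    double mul[3] = { camMul[0] / m, camMul[1] / m, camMul[2] / m };
    // Scale the black-subtracted data to the full 16-bit range.
    double scale = 65535.0 / (maximum - black);

    for (int row = 0; row < height; ++row)
        for (int col = 0; col < width; ++col) {
            uint16_t& v = raw[row * width + col];
            double x = (v > black ? v - black : 0) * scale * mul[cfa(row, col)];
            v = (uint16_t)std::min(x, 65535.0); // clip to 16 bit
        }
}
```

After this the buffer goes into my GPU demosaic.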

Notes:

In comment https://www.libraw.org/comment/4277#comment-4277 Alex says that rgb_cam is "from RAW to sRGB". I am not sure this is what he meant, because in the same comment he uses "RAW" for the raw mosaic data. I assume he meant "camera color space" or "raw color space", or whatever term works best for "unaltered color space from the camera".

In comment https://www.libraw.org/comment/4519#comment-4519 Alex says that "data scaling to full range" (may I call this "normalization"?) is to be done AFTER demosaic. Is that correct: is the normalization step ("scale to full range") to be done after demosaic, i.e. in RGB color space?

My questions are:

- should normalization be done before demosaic or after? If I follow the comment above (scale to full range after demosaic), I get "wrong" color balancing; but if I do the normalization step after my "brute force" multiply-by-maximum (see above), I get a plausible color balance (yet too dark)
- I am pretty sure that using imgdata.color.maximum is bad, since my color balance is slightly off compared to a standard libraw run on the same data with gamma 1.0/1.0 (linear) and output color space 0 ("raw" or "unaltered" color space). What would be the better scale-to-max approach?
- I am unable to get the matrix multiplication to output correct color. This is clearly a problem in my workflow (see above). Since the fourth column of the matrix is 0, I don't think it is meant to be applied to the RAW data, as Alex's comment seems to indicate: that would set the second green pixel to 0 and effectively lose resolution. So it must be applied to RGB data after demosaic, but there I get clipping. Could you please give me a bit of pseudo-code to check against mine (see below), or point out the error in my workflow (am I missing a step)?

Thank you so much in advance!

Pseudo-code for applying the rgb_cam matrix:

for x/y in RGB-image: // loop over all pixels in the RGB image
  double rgbM[3]; // 3-component vector for the RGB values, "single-column matrix"
  rgbM[0] = rgbM[1] = rgbM[2] = 0; // clear the vector
  for (int c = 0; c < 3; c++) // matrix multiplication (based on libraw code)
  {
    rgbM[0] += imgdata.color.rgb_cam[0][c] * RGB_channel[c];
    rgbM[1] += imgdata.color.rgb_cam[1][c] * RGB_channel[c];
    rgbM[2] += imgdata.color.rgb_cam[2][c] * RGB_channel[c];
  }
  for (int c = 0; c < 3; c++) if (rgbM[c] > 65535) rgbM[c] = 65535; // clipping
  RGB-image(x,y) = rgbM; // set the pixel color to the 3-component vector
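For completeness, the same step as a compilable function. rgbCam stands in for imgdata.color.rgb_cam; I also clamp negative results here, since the matrix has negative entries (I am unsure whether that clamp is correct, too):

```cpp
#include <algorithm>
#include <cstdint>

// Compilable version of the pseudo-code above. rgbCam stands in for
// imgdata.color.rgb_cam; only the first three columns are used for a
// 3-color camera. "in" is one demosaiced pixel, "out" the result.
void applyRgbCam(const float rgbCam[3][4], const double in[3],
                 uint16_t out[3])
{
    for (int i = 0; i < 3; ++i) {
        double acc = 0.0;
        for (int c = 0; c < 3; ++c)
            acc += rgbCam[i][c] * in[c];
        // Clamp both ends: rgb_cam has negative entries, so results
        // can go below 0 as well as above 65535.
        out[i] = (uint16_t)std::max(0.0, std::min(acc, 65535.0));
    }
}
```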
 
Thanks again, and apologies for the lengthy message!
 
Mact
