.rawdata.raw_image and .image after unpack()

Hello,

After reading and trying out LibRaw for a few days, I am confused about the structure of imgdata and what unpack() does to it:

When calling unpack(), the documentation at http://www.libraw.org/docs/API-CXX-eng.html#unpack
says:

Unpacks the RAW files of the image, calculates the black level (not for all formats). The results are placed in imgdata.image.

Yet there is also imgdata.rawdata.raw_image, which gives me access to the raw, unprocessed image.

What is the difference between imgdata.image and imgdata.rawdata.raw_image right after calling unpack()?
Is imgdata.image affected by some imgdata.params fields while imgdata.rawdata.raw_image is not (by definition, it would stay "raw" indeed)?

Should I also understand that unpack() fills in imgdata.rawdata.raw_image as well (and not only imgdata.image)? The latter appeared empty of any values when I printed them out before unpack().

Thank you for this clarification. And thank you also for all your efforts on LibRaw.

Raphael

It's working, thank you.

After this I have yet to apply the white balance. Before I did the above, when using dcraw_process(), I was satisfied with the white balance I got with:
 rawProcess.imgdata.params.use_camera_wb = 1;

How can I get the exact same coefficients so that I can apply them to my demosaiced RGB channels?

These coefficients are stored in imgdata.color.cam_mul[4].

Depending on the camera used, the 4th coefficient ('second green') may be:
- zero (use cam_mul[1] for the second green, or assume the two greens in the 2x2 matrix are the same)
- the same as cam_mul[1] (OK)
- different from cam_mul[1] (a real 4-color camera, like the old Olympus E-5xx series or Sony F-828, or a CMYG camera like the Nikon E5700 or Canon G1)

For 4-color cameras you need to use a demosaic method suited for this case (4 colors in a 2x2 matrix, not two similar greens plus one R and one B).
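
For illustration, a minimal sketch of resolving the second-green cases above (the function name is just for this example):

#include <libraw/libraw.h>

// Pick a usable multiplier for the second green, following the cases above.
// 'proc' is a LibRaw instance after open_file() and unpack().
float second_green_multiplier(const LibRaw &proc)
{
    const float *cm = proc.imgdata.color.cam_mul; // order: R, G, B, G2
    if (cm[3] == 0.0f)  // not stored: reuse the first green
        return cm[1];
    return cm[3];       // stored: equals cm[1] for RGB cameras, differs for 4-color ones
}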

-- Alex Tutubalin @LibRaw LLC

Ok, in imgdata.color.cam_mul I read all 4 values above a thousand: 1862, 1024, 1833, 1024.
I'm confused by the scaling. Shouldn't I normalize this somehow? I'm used to values like 1.xxx, not values on the order of 10^3. What am I missing?

These values are read from camera metadata and are not altered.
Just normalize them: divide by the smallest non-zero multiplier (or by the green multiplier for RGB cameras).
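
For illustration, a minimal normalization sketch (names are just for this example):

#include <libraw/libraw.h>

// Scale cam_mul so the smallest non-zero multiplier becomes 1.0.
void normalized_wb(const LibRaw &proc, float out[4])
{
    const float *cm = proc.imgdata.color.cam_mul;
    float smallest = 0.0f;
    for (int i = 0; i < 4; i++)
        if (cm[i] > 0.0f && (smallest == 0.0f || cm[i] < smallest))
            smallest = cm[i];
    for (int i = 0; i < 4; i++)
        out[i] = (cm[i] > 0.0f ? cm[i] : cm[1]) / smallest; // reuse G for a missing G2
}

With the values above this yields roughly 1.82, 1.00, 1.79, 1.00.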

-- Alex Tutubalin @LibRaw LLC

So, here, if I choose to divide by, say, 1024 (the green multiplier), then I do not multiply the green channel at all, and I multiply the others by their multiplier value divided by 1024. Is that how it goes?

Yes, usually the green multiplier is set to 1.0 (green is the strongest channel unless very warm light is used), so you may scale only the red/blue channels.

BTW, if SSE or AVX (vectorized) instructions are used to apply WB, it is cheaper to multiply green by 1.0 than to split, multiply, and rejoin the pixel data.

-- Alex Tutubalin @LibRaw LLC

Thanks for the clarification. My bottleneck is not there for now (I don't know anything about SSE or AVX...). I multiply the array by the R and B multipliers with OpenCL via OpenCV.
Regarding RawSpeed: would that change unpack() speed? At the moment, dealing with >20MP images, it takes >1s to unpack, but <0.2s to demosaic with OpenCV plus white balance. Ideally I'd like to make unpack() take <1s.
Can I make unpack() faster with RawSpeed? How would RawSpeed affect your above pseudo-code for accessing the visible pixels?

Yes, RawSpeed is faster for Huffman-compressed formats (Canon CR2, Nikon NEF, some others).

Please note: LibRaw is tested with RawSpeed's 'master' branch (https://github.com/klauspost/rawspeed/tree/master), which is very old, last updated in May 2014.
The up-to-date 'develop' branch is untested; I am not sure LibRaw will work with it.

My pseudo-code above is correct if you use sizes.raw_pitch.

-- Alex Tutubalin @LibRaw LLC

Indeed, I saw RawSpeed was not maintained as regularly as you maintain LibRaw, and that's why I've been reluctant to bring in RawSpeed. I can deal with LibRaw without RawSpeed; it's good enough.

Back to the white balancing:

I multiplied my channels as advised (divided by the green multiplier, leaving the green channel untouched), but the result is far from the white balance I had with dcraw_process().
After unpack(), besides the demosaicing, is there anything I have to do before/after my white balancing to get something close to dcraw_process()?
Because if not, then I guess the OpenCV demosaicing algorithm is such that white balancing with the camera white balance multipliers does not apply the same way it did with dcraw_process().

I guess you forgot to subtract the black level?

As a first step, use:
imgdata.color.black - the base level
imgdata.color.cblack[0..3] - per-channel additions

This should be done before white balance.

-- Alex Tutubalin @LibRaw LLC

Followup:

General raw processing steps are (see the sketch after this list):
- black level subtraction
- white balance (and, possibly, data scaling: if you use integer processing it is better to use full range)
- demosaic
- color conversion to output color space
- gamma curve.
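
For illustration, a minimal sketch of the first two steps on the raw mosaic; demosaic, color conversion, and gamma are left to your own pipeline (all names here are just for this example):

#include <libraw/libraw.h>
#include <vector>

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    LibRaw proc;
    if (proc.open_file(argv[1]) != LIBRAW_SUCCESS) return 1;
    if (proc.unpack() != LIBRAW_SUCCESS) return 1;
    if (!proc.imgdata.rawdata.raw_image) return 1; // Bayer data only

    const libraw_image_sizes_t &S = proc.imgdata.sizes;
    const libraw_colordata_t &C = proc.imgdata.color;
    const unsigned short *raw = proc.imgdata.rawdata.raw_image;
    std::vector<float> mosaic((size_t)S.height * S.width);

    for (int row = 0; row < S.height; row++)
        for (int col = 0; col < S.width; col++)
        {
            int c = proc.COLOR(row, col);
            // step 1: black level subtraction
            float v = raw[(row + S.top_margin) * (S.raw_pitch / 2) + col + S.left_margin];
            v -= C.black + C.cblack[c];
            if (v < 0) v = 0;
            // step 2: white balance, normalized so green stays at 1.0
            float mul = C.cam_mul[c] > 0 ? C.cam_mul[c] : C.cam_mul[1];
            mosaic[(size_t)row * S.width + col] = v * mul / C.cam_mul[1];
        }
    // steps 3-5: demosaic, conversion to output color space, gamma (not shown)
    return 0;
}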

-- Alex Tutubalin @LibRaw LLC

Right, sorry Alex, I forgot those steps, even though you explain them in the documentation of the C++ API.

Nonetheless, I don't understand what imgdata.color.black is. It is defined in the documentation as just "unsigned black", but unsigned what?? Is that another way to say void* (a pointer to any type)? Or is "black" an actual structure? I'm confused by all the "unsigned" not followed by a usual type (as in "unsigned int") but by "black", "maximum", etc...

I only understand that calling subtract_black() will subtract the black level "values" and black level "data", yet I don't understand the difference between "black level values" and "black level data" as written in http://www.libraw.org/docs/API-CXX-eng.html#subtract_black.
Could you shed some light on this "dark-ness"...?

Indeed, my app is for astrophotography; we take so-called stacks of "dark frames" and subtract them from the raw images. I wonder if using black, cblack... would be redundant, or even worse, inconsistent with using calibration dark frames taken ourselves. Understanding these black/cblack levels and data will help me decide what I shall (not) use.

And merry Christmas to you!! (it's Dec 25th...)

For the subtract_black() call, 'black level value' and 'black level data' are the same. They are just different terms for the same thing, used to maintain 'writing style' (by not repeating the same words many times).

In my pseudo-code, you need to do it this way:

LibRaw imgp; // image processor instance
imgp.open_file(...);
imgp.unpack();
for (int row = 0; row < imgp.imgdata.sizes.height; row++)
    for (int col = 0; col < imgp.imgdata.sizes.width; col++)
    {
        int color_index = imgp.COLOR(row, col);
        int black_level_for_this_color = imgp.imgdata.color.black + imgp.imgdata.color.cblack[color_index];
        int pixel_value_adjusted_for_black = VISIBLE_PIXEL(row, col) - black_level_for_this_color;
        if (pixel_value_adjusted_for_black < 0)
            pixel_value_adjusted_for_black = 0; // clamp sensor noise below black level
    }

VISIBLE_PIXEL is defined in my previous message.
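
That message is not quoted in this thread; based on the sizes.raw_pitch remark earlier, a plausible (hypothetical, not quoted) definition is:

// Hypothetical reconstruction of VISIBLE_PIXEL: index into rawdata.raw_image,
// skipping the masked border; raw_pitch is in bytes, raw_image holds 16-bit values.
#define VISIBLE_PIXEL(row, col) \
    (imgp.imgdata.rawdata.raw_image[((row) + imgp.imgdata.sizes.top_margin) * \
        (imgp.imgdata.sizes.raw_pitch / 2) + (col) + imgp.imgdata.sizes.left_margin])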

But for astrophotography it is better to average several dark frames and subtract that average (and not use black/cblack[]).
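
For illustration, a minimal averaging sketch (assuming all frames are already unpacked to equally sized buffers; names are just for this example):

#include <cstdint>
#include <vector>

// Average N dark frames (each 'count' 16-bit samples) into a master dark.
std::vector<float> average_darks(const std::vector<const uint16_t *> &frames, size_t count)
{
    std::vector<float> master(count, 0.0f);
    for (const uint16_t *frame : frames)
        for (size_t i = 0; i < count; i++)
            master[i] += frame[i];
    for (size_t i = 0; i < count; i++)
        master[i] /= frames.size();
    return master; // subtract per pixel from each light frame, before white balance
}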

-- Alex Tutubalin @LibRaw LLC

Ok, thanks. Indeed, that's probably why I overlooked the black/cblack business, since I was already doing that on my own with the stacked (averaged) dark frames. However, I have yet to obtain a similar white balance.
In your opinion, would it still make sense to do my own averaged dark frame subtraction (and no black/cblack business) and afterwards apply the white balance coefficients "as shot"? The idea behind this is that subtracting a dark frame (averaged or not), since I do the job per channel, is like an (x,y)-dependent black and cblack subtraction, isn't it?

White balance should be applied after black level subtraction, indeed.

The difference between the in-camera stored black level data and an averaged dark frame will affect only the darkest parts of the image, not the overall look.

-- Alex Tutubalin @LibRaw LLC

Well, in deep sky imaging we always deal with faint signal; we have rather dark pixels all over the place, so different "dark" subtraction methods (levels or averaged frames) really do make a different overall look. It's a critical step for a good final result. I'll give it a try with white balancing after dark frame subtraction.

Yes, I use averaged dark frame subtraction for night shots (not deep sky, but landscapes lit by the moon or the Milky Way). It helps a lot with banding and other pattern noise.

Anyway, the 'standard' (black+cblack) values and averaged values are close enough not to change the white balance much.

-- Alex Tutubalin @LibRaw LLC

I'm puzzled again. Whether I try unpack() followed by subtract_black() or not, it makes no difference whether I set rawProcess.imgdata.params.use_camera_wb to 0 or to 1 (I set that before unpack()). Does this affect the raw_image data I'm using so far for my own processing, or does it only affect imgdata.image after a call to raw2image() and/or dcraw_process()? I only notice differences if I use imgdata.image.
The documentation says it affects unpack(), hence my asking.

subtract_black() works with imgdata.image.

rawdata.raw_image is unaffected (to allow several processings/renderings of the same raw data with different parameters).
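
For illustration, a minimal sketch of rendering the same raw data twice with different parameters (the file name is hypothetical):

#include <libraw/libraw.h>

int main()
{
    LibRaw proc;
    if (proc.open_file("shot.nef") != LIBRAW_SUCCESS) return 1; // hypothetical file
    if (proc.unpack() != LIBRAW_SUCCESS) return 1; // fills rawdata.raw_image once

    // First rendering: camera white balance
    proc.imgdata.params.use_camera_wb = 1;
    proc.dcraw_process(); // works on a copy; rawdata.raw_image stays untouched
    libraw_processed_image_t *a = proc.dcraw_make_mem_image();

    // Second rendering of the same raw data: auto white balance
    proc.imgdata.params.use_camera_wb = 0;
    proc.imgdata.params.use_auto_wb = 1;
    proc.dcraw_process();
    libraw_processed_image_t *b = proc.dcraw_make_mem_image();

    LibRaw::dcraw_clear_mem(a);
    LibRaw::dcraw_clear_mem(b);
    return 0;
}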

-- Alex Tutubalin @LibRaw LLC
