Recent comments

Reply to: .rawdata.raw_image and .image after unpack()   8 years 4 months ago

OK, thanks. Indeed, that's probably why I overlooked the black / cblack business, since I was already doing that on my own with the stacked (averaged) dark frames. However, I have yet to obtain similar white balancing.
In your opinion, would it still make sense to do my own averaged dark-image subtraction (and skip the black / cblack business) and afterwards apply the white balance coefficients "as shot"? The idea behind this is that subtracting a dark image (averaged or not) is, since I do the job per channel, like an (x,y)-dependent black and cblack subtraction, isn't it?

Reply to: .rawdata.raw_image and .image after unpack()   8 years 4 months ago

For the subtract_black() call, the 'black level value' and the 'black level data' are the same thing. They are just different terms for the same quantity, used to vary the writing style (by not repeating the same words many times).

In my pseudo code, you need to do this way:

LibRaw imgp; // image processor object
imgp.open_file(...);
imgp.unpack();
for (int row = 0; row < imgp.imgdata.sizes.height; row++)
    for (int col = 0; col < imgp.imgdata.sizes.width; col++)
    {
        int color_index = imgp.COLOR(row, col);
        int black_level_for_this_color = imgp.imgdata.color.black + imgp.imgdata.color.cblack[color_index];
        int next_pixel_value_adjusted_for_black = VISIBLE_PIXEL(row, col) - black_level_for_this_color;
        if (next_pixel_value_adjusted_for_black < 0)
            next_pixel_value_adjusted_for_black = 0;
    }

VISIBLE_PIXEL is defined in my previous message.

But for astrophotography it is better to average several dark frames and subtract that averaged frame instead (and not use black/cblack[] at all).

Reply to: .rawdata.raw_image and .image after unpack()   8 years 4 months ago

Right, sorry Alex, I forgot those steps that you explained in the documentation of the C++ API.

Nonetheless, I don't understand what "imgdata.colordata.black" is. It is defined in the documentation as just "unsigned black", but "unsigned what?" Is that another way to say void* (a pointer to any type)? Or is "black" an actual structure? So I'm confused by all the "unsigned" declarations not followed by a usual type (as in "unsigned int") but by "black", "maximum", etc.

I only understand that calling subtract_black() will subtract the black level "values" and black level "data", yet I don't understand the difference between "black level values" and "black level data", as written in http://www.libraw.org/docs/API-CXX-eng.html#subtract_black.
Could you shed some light on this "dark-ness"... ?

Indeed, my app is for astrophotography; we take so-called stacks of "dark images" and subtract them from the raw images. I wonder if using black, cblack, etc. would be redundant, or even worse, inconsistent with using calibration dark images taken ourselves. Understanding this dark, cblack, levels, data, ... will help me decide what I shall (not) use.

And Merry Christmas to you!! (it's Dec. 25th...)

Reply to: .rawdata.raw_image and .image after unpack()   8 years 4 months ago

Followup:

General raw processing steps are:
- black level subtraction
- white balance (and, possibly, data scaling: if you use integer processing it is better to use full range)
- demosaic
- color conversion to output color space
- gamma curve.
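The first two steps above can be sketched for a single CFA value. This is a hedged illustration: the function and parameter names are made up, and the actual scale factor depends on the camera's black level and saturation point:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Steps 1-2 of the pipeline on one raw value: subtract the black level,
// then apply the white-balance multiplier and scale toward the full
// 16-bit range (demosaic, color conversion and gamma come afterwards).
// All names here are illustrative, not LibRaw API.
uint16_t black_and_wb(uint16_t raw, int black_level, float wb_mul, float scale_to_full_range)
{
    int v = static_cast<int>(raw) - black_level;      // 1. black level subtraction
    if (v < 0)
        v = 0;                                        //    clamp at zero
    float scaled = v * wb_mul * scale_to_full_range;  // 2. white balance + data scaling
    return static_cast<uint16_t>(std::min(scaled, 65535.0f));
}
```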

Reply to: .rawdata.raw_image and .image after unpack()   8 years 4 months ago

I guess you forgot to subtract the black level?

As a first step, use:
imgdata.color.black - base black level
imgdata.color.cblack[0..3] - per-channel additions

This should be done before white balance.

Reply to: .rawdata.raw_image and .image after unpack()   8 years 4 months ago

Back to the white balancing:

I multiplied my channels as advised (dividing by the green multiplier and not touching the green channel), but the result is far from the white balancing I had with dcraw_process().
After unpack(), besides demosaicing, is there anything I have to do before/after my white balancing to get something close to dcraw_process()?
Because if not, then I guess the OpenCV demosaicing algorithm is such that white balancing with the camera white balance multipliers does not apply the same way it did with dcraw_process().

Reply to: .rawdata.raw_image and .image after unpack()   8 years 4 months ago

Indeed, I saw that RawSpeed is not maintained as regularly as you maintain LibRaw, and that's why I've been reluctant to implement RawSpeed. I can deal with LibRaw without RawSpeed; it's good enough.

Reply to: .rawdata.raw_image and .image after unpack()   8 years 4 months ago

Yes, RawSpeed is faster for Huffman-compressed formats (Canon CR2, Nikon NEF, some others).

Please note: LibRaw is tested with RawSpeed's 'master' branch (https://github.com/klauspost/rawspeed/tree/master) which is very old, last updated in May 2014.
The up-to-date 'develop' branch is untested; I am not sure LibRaw will work with it.

My pseudo-code above is correct if you use sizes.raw_pitch.

Reply to: .rawdata.raw_image and .image after unpack()   8 years 4 months ago

Thanks for the clarification. My bottleneck is not there for now (I don't know anything about SSE or AVX...). I multiply the array by the R and B multipliers with OpenCL via OpenCV.
Regarding RawSpeed: would it change unpack() speed? At the moment, dealing with >20 MP images, it takes >1 s to unpack, but <0.2 s to demosaic with OpenCV + white balance. Ideally I'd like to make unpack() faster, <1 s.
Can I make unpack() faster with RawSpeed? How would RawSpeed affect your above pseudo-code for accessing the visible pixels?

Reply to: .rawdata.raw_image and .image after unpack()   8 years 4 months ago

Yes, usually the green multiplier is set to 1.0 (green is the strongest channel unless very warm light is used), so you may scale only the red/blue channels.

BTW, if SSE or AVX (vectorized) instructions are used to apply WB, it is cheaper to multiply green by 1.0 than to split-multiply-join the pixel data.
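A branch-free formulation of that idea might look like the following sketch, assuming an RGGB mosaic stored row-major (the layout and names are assumptions, not LibRaw API). Multiplying the greens by 1.0 keeps the inner loop uniform, which is what lets a compiler vectorize it:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Apply per-photosite WB multipliers with no per-pixel branching: the
// green positions simply carry a multiplier of 1.0f. mul_for_pos maps
// (row % 2, col % 2) to a multiplier; an RGGB layout is assumed here.
void apply_wb(uint16_t* data, int rows, int cols, const float mul_for_pos[2][2])
{
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++) {
            float v = data[(size_t)r * cols + c] * mul_for_pos[r & 1][c & 1];
            // clamp to the 16-bit range before storing back
            data[(size_t)r * cols + c] = v > 65535.0f ? 65535 : (uint16_t)v;
        }
}
```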

Reply to: .rawdata.raw_image and .image after unpack()   8 years 4 months ago

So here, if I choose to divide by, say, 1024 (the green multiplier), then I do not multiply the green channel at all, and I multiply the other channels by their multiplier value divided by 1024. Is that how it goes?

Reply to: .rawdata.raw_image and .image after unpack()   8 years 4 months ago

These values are read from the camera metadata and are not altered.
Just normalize them: divide by the smallest non-zero value (or by the green multiplier for RGB cameras).
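That normalization could be sketched as follows (illustrative only, not LibRaw API; it divides all four multipliers by the smallest non-zero one, which is green for most RGB cameras):

```cpp
#include <cassert>

// Normalize cam_mul-style WB multipliers in place by dividing each by
// the smallest non-zero value, so the weakest channel ends up at 1.0.
void normalize_cam_mul(float mul[4])
{
    float smallest = 0.0f;
    for (int i = 0; i < 4; i++)
        if (mul[i] > 0.0f && (smallest == 0.0f || mul[i] < smallest))
            smallest = mul[i];
    if (smallest > 0.0f)
        for (int i = 0; i < 4; i++)
            mul[i] /= smallest;
}
```

With the values from the question (1862, 1024, 1833, 1024) this yields roughly 1.82, 1.0, 1.79, 1.0, which is the 1.xxx scale the poster expected.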

Reply to: .rawdata.raw_image and .image after unpack()   8 years 4 months ago

OK, in imgdata.color.cam_mul I read four values of a thousand or more: 1862, 1024, 1833, 1024.
I'm confused by the scaling. Shouldn't I normalize this somehow? I'm used to values of 1.xxx, not values on the order of 10^3. What am I missing?

Reply to: .rawdata.raw_image and .image after unpack()   8 years 4 months ago

These coefficients are stored in imgdata.color.cam_mul[4]

Depending on the camera used, the 4th coefficient ('second green') may be:
- zero (use cam_mul[1] for the second green, or assume the two greens in the 2x2 matrix are the same)
- the same as cam_mul[1] (OK)
- different from cam_mul[1] (a true 4-color camera, like the old Olympus E-5xx series or Sony F-828, or a CMYG camera like the Nikon E5700 or Canon G1)

For 4-color cameras you need to use a demosaic method suited to this case (4 distinct colors in the 2x2 matrix, not two similar greens plus one R and one B).
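A hedged sketch of handling that fourth coefficient, under the assumption that cam_mul is ordered R, G, B, G2 (the enum and function names are invented for illustration):

```cpp
#include <cassert>

// Whether the 2x2 mosaic holds two identical greens (ordinary Bayer)
// or four genuinely distinct colors.
enum class CfaKind { Bayer2Green, FourColor };

// Resolve the 'second green' multiplier: a zero fourth entry means
// "reuse the first green".
float second_green(const float cam_mul[4])
{
    return cam_mul[3] == 0.0f ? cam_mul[1] : cam_mul[3];
}

// A fourth coefficient that is non-zero and differs from the first green
// indicates a true 4-color sensor needing a suitable demosaic method.
CfaKind classify(const float cam_mul[4])
{
    return (cam_mul[3] != 0.0f && cam_mul[3] != cam_mul[1])
               ? CfaKind::FourColor
               : CfaKind::Bayer2Green;
}
```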

Reply to: .rawdata.raw_image and .image after unpack()   8 years 4 months ago

It's working, thank you.

After this I still have to apply the white balancing. Before I did the above, when using dcraw_process(), I was satisfied with the white balancing when using:
 rawProcess.imgdata.params.use_camera_wb = 1;

How may I get the exact same coefficients to apply them to my demosaiced RGB channels?

Reply to: .rawdata.raw_image and .image after unpack()   8 years 4 months ago

Each row contains invisible pixels at both ends:
- on the left (0 ... left_margin-1)
- and on the right (from the end of the visible area to the end of the row)
Also, rows may be aligned for efficient SSE/AVX access. LibRaw's internal code does not align rows, but if you use LibRaw+RawSpeed, RawSpeed will align rows to 16 bytes.
So it is better to use imgdata.sizes.raw_pitch (it is in bytes, so divide by 2 for Bayer data) instead of raw_width.

So, the right (pseudo)code is something like this:

#define VISIBLE_PIXEL(row,col) \
  imgdata.rawdata.raw_image[(row+imgdata.sizes.top_margin)*imgdata.sizes.raw_pitch/2 + imgdata.sizes.left_margin+col]

for (int r = 0; r < imgdata.sizes.height; r++)
    for (int c = 0; c < imgdata.sizes.width; c++)
        next_pixel = VISIBLE_PIXEL(r, c);

Prefix imgdata with your LibRaw object name to get correct code.

Reply to: .rawdata.raw_image and .image after unpack()   8 years 4 months ago

After unpack(), I'm trying to use raw_image as input to OpenCV demosaicing (with cvtColor and type CV_BayerGB2RGB), and thus avoid using dcraw_process() for demosaicing. OpenCV here requires a 1-channel image. Yet when I use the raw data after unpack(), starting from the first visible pixel, it gives me a garbage image.
As far as you can tell, at least on LibRaw's side, am I reading the raw CFA data wrong? See below:

int raw_width = (int) rawProcess.imgdata.sizes.raw_width;
int top_margin = (int) rawProcess.imgdata.sizes.top_margin;
int left_margin = (int) rawProcess.imgdata.sizes.left_margin;
int first_visible_pixel = (int) (raw_width * top_margin + left_margin);

cv::Mat imRaw(naxis2, naxis1, CV_16UC1);
ushort *rawP = imRaw.ptr<ushort>(0);
for (int i = 0; i < nPixels; i++)
{
    rawP[i] = (ushort) rawProcess.imgdata.rawdata.raw_image[i + first_visible_pixel];
}

cv::Mat demosaiced16;
cvtColor(imRaw, demosaiced16, CV_BayerGB2RGB);

Above, imRaw.ptr<ushort>(0) is the pointer to the data buffer of the cv::Mat object into which I want to copy my raw_image data.

The expected image (which I get after dcraw_process) is:
https://www.dropbox.com/s/unn695en6hpdr3j/dcraw_process.jpg?dl=0

And instead, I have this:
https://www.dropbox.com/s/c1f8s3fjgqy0tit/Failed_demosaic.jpg?dl=0

Reply to: LibRaw image size data vs Adobe interpretation   8 years 4 months ago

I know they can be vendor-specific, but I wondered if you would take patches for decoding such information?

For example, to get access to the Canon data, parse_makernote() would need code around line 8686, something like:

else if (tag == 0x0098) // CropInfo
{
    unsigned short CropLeft = get2();
    unsigned short CropRight = get2();
    unsigned short CropTop = get2();
    unsigned short CropBottom = get2();
    fprintf(stderr, " CropInfo %d %d %d %d\n", CropLeft, CropRight, CropTop, CropBottom);
}
else if (tag == 0x009a) // AspectInfo
{
    unsigned int ratio = get4();
    unsigned int CroppedWidth = get4();
    unsigned int CroppedHeight = get4();
    unsigned int CropLeft = get4();
    unsigned int CropTop = get4();
    fprintf(stderr, " AspectInfo %d %d %d %d %d\n", ratio, CroppedWidth, CroppedHeight,
            CropLeft, CropTop);
}

Obviously this would need to be stored somewhere useful rather than simply printed out, but it could then be passed back to the real crop functionality.
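One hypothetical shape for such storage, with the struct and field names invented for illustration (they are not part of LibRaw):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical container for the parsed Canon CropInfo/AspectInfo tags;
// a real patch would place something like this in the makernotes data
// and fill it inside parse_makernote() instead of calling fprintf.
struct CanonCropInfo {
    uint16_t crop_left = 0, crop_right = 0, crop_top = 0, crop_bottom = 0;  // tag 0x0098
    uint32_t aspect_ratio = 0, cropped_width = 0, cropped_height = 0;       // tag 0x009a
    bool valid = false;  // set once either tag has been parsed
};
```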

Kevin

http://cpansearch.perl.org/src/EXIFTOOL/Image-ExifTool-9.90/html/TagName...

Reply to: Barrel distortion   8 years 4 months ago

thanks

Reply to: Barrel distortion   8 years 4 months ago

1st google result for lensfun is http://lensfun.sourceforge.net/

Reply to: Barrel distortion   8 years 4 months ago

Sorry to say, I am new to this field.
What is this lensfun?

Reply to: Barrel distortion   8 years 4 months ago

No.

Use lensfun.

Reply to: LibRaw on Visual Studio with Target Machine x64   8 years 5 months ago

Sorry, I know nothing about creating Linux shared libraries.

Meanwhile, ./configure should work.

Reply to: LibRaw on Visual Studio with Target Machine x64   8 years 5 months ago

It's a huge makefile, any tip on what to change there?

Reply to: LibRaw on Visual Studio with Target Machine x64   8 years 5 months ago

Yes, you need to change the makefile, or use the ./configure machinery.
