Sorry, the docs are slightly outdated in this particular place; this will be fixed ASAP.
unpack() in current LibRaw (0.16+) stores the raw data in imgdata.rawdata.raw_image (or color3_image, or color4_image). For raw_image (Bayer data) this is one component per pixel.
raw2image() populates imgdata.image[][4] from imgdata.rawdata (4 components per pixel, but only one of them filled with a value for Bayer images).
image[][4] is then used for all postprocessing by dcraw_process().
This is a result of a modification made in version 0.16. Prior to it, unpack() worked with imgdata.image[] directly, so:
1) multiple runs of dcraw_process() on the same raw data with different settings were impossible;
2) if an application uses only the raw data and does not need dcraw_process() (i.e., does its own processing), image[] is a 4x waste of memory.
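Conceptually, the raw2image() step can be sketched like this (a simplified illustration, not LibRaw's actual code; fc() is the dcraw-style CFA color lookup, and the filters/width/height values would come from imgdata):

```cpp
#include <array>
#include <cstdint>
#include <vector>

// dcraw-style CFA color index for a pixel: 0 = R, 1 = G, 2 = B, 3 = G2.
int fc(unsigned filters, int row, int col) {
    return (filters >> ((((row << 1) & 14) + (col & 1)) << 1)) & 3;
}

// Copy each one-component Bayer sample into the matching color slot of a
// 4-component pixel, leaving the other three components zero.
std::vector<std::array<std::uint16_t, 4>> raw_to_image(
    const std::vector<std::uint16_t>& raw, int width, int height,
    unsigned filters) {
    std::array<std::uint16_t, 4> zero{0, 0, 0, 0};
    std::vector<std::array<std::uint16_t, 4>> image(raw.size(), zero);
    for (int row = 0; row < height; ++row)
        for (int col = 0; col < width; ++col) {
            int i = row * width + col;
            image[i][fc(filters, row, col)] = raw[i];
        }
    return image;
}
```

For an RGGB sensor (filters = 0x94949494 in dcraw's encoding) each output pixel ends up with exactly one non-zero component, which is why image[] is 4x the size of the raw buffer.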
Sorry...
http://www.libraw.org/docs/API-datastruct-eng.html#libraw_image_sizes_t
flip field.
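If you work on image[] directly, you can read imgdata.sizes.flip and apply the rotation yourself. A minimal sketch, assuming flip follows dcraw's convention (0 = none, 3 = 180 deg, 5 = 90 deg CCW, 6 = 90 deg CW) and a single-component row-major buffer:

```cpp
#include <vector>

// Rotate a w x h row-major buffer according to a dcraw-style flip value.
// For flip 5 and 6 the output is h pixels wide and w pixels tall.
std::vector<int> apply_flip(const std::vector<int>& img, int w, int h,
                            int flip) {
    std::vector<int> out(img.size());
    for (int r = 0; r < h; ++r)
        for (int c = 0; c < w; ++c) {
            int v = img[r * w + c];
            switch (flip) {
                case 3:  out[(h - 1 - r) * w + (w - 1 - c)] = v; break; // 180 deg
                case 5:  out[(w - 1 - c) * h + r] = v; break;           // 90 deg CCW
                case 6:  out[c * h + (h - 1 - r)] = v; break;           // 90 deg CW
                default: out[r * w + c] = v; break;                     // none
            }
        }
    return out;
}
```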
Please pay attention to the documentation we supply. A lot of effort went into it, and I do not see any reason to continue selective quoting from there.
That explains all... ;)
I use the image[] array directly: is there a way to know, using LibRaw, if I should rotate it?
Checked with a raw from the A7R-II review: http://www.photographyblog.com/reviews/sony_a7r_ii_review/sample_images/ (raw #95)
dcraw_emu (without any parameters) rotates the image vertically, at the correct angle.
Update:
LibRaw/dcraw rotates the image on output, not in the image[] array.
Thanks... That's very nice to have your help... :)
user_mul is used only in dcraw_process().
For other parameters that may affect unpack() please read API notes: http://www.libraw.org/docs/API-notes-eng.html#imgdata_params
Thank you! I had missed that one!
Is user_mul being used in unpack() or just in dcraw_process()?
In other terms, do I have to:
change user_mul
unpack()
dcraw_process()
or is it enough to:
change user_mul
dcraw_process()?
Sylvain.
use imgdata.params.user_mul if you need to set manual white balance.
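As a conceptual sketch of what those multipliers mean (an illustration of the convention, not LibRaw internals): user_mul holds four per-channel white-balance multipliers in R, G, B, G2 order, and a zero fourth entry is treated as "same as G":

```cpp
#include <array>

// Apply user-supplied white-balance multipliers to one 4-component pixel.
// A zero G2 multiplier (index 3) conventionally means "reuse the G value".
std::array<float, 4> apply_wb(std::array<float, 4> px,
                              std::array<float, 4> mul) {
    if (mul[3] == 0.0f) mul[3] = mul[1]; // G2 defaults to G
    for (int c = 0; c < 4; ++c) px[c] *= mul[c];
    return px;
}
```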
Well, I've done tests on this, and they didn't work.
Here is my workflow:
open_file
set pre_mul
unpack
dcraw_process(1)
change pre_mul
dcraw process(2)
The change to pre_mul gets ignored.
I've tried adding unpack() just before dcraw_process(2), but the change to pre_mul is still ignored.
That's no big deal for me, I simply recycle() and (re)open(). I just wanted to keep you posted... ;)
Sylvain.
Ok. I was confused with raw2image().
So after unpack(), an unaltered version of the unpacked data is kept elsewhere than imgdata.image.... Thank you... :)
unpack() does not fill/allocate the image array.
I thought that unpack() was filling imgdata.image with non demosaiced data (only one component per pixel on Bayer layouts), and that dcraw_process() was reading those values and filling the missing components in the same buffer using its demosaicing algorithm...
Is dcraw_process() reading it's non demosaiced data from another buffer than imgdata.image?
You can call dcraw_process() multiple times, no additional data move needed.
So that's how dcraw_process() works in that case: clipping all channels at 40000, and stretching that from 40000 to 65535?
To avoid colored highlights, you need to clip all channels at the same level (after WB is applied).
So, if red clips at 60000 and green at 40000 (as in your example values), you need to clip all three channels at the green clip level.
Without this you'll get colored (magenta in this case) highlights (and we *frequently* see it in video, even at a high level, such as Formula 1 broadcasts).
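The rule above can be sketched as follows (values are the example ones from this thread; sat would be the per-channel saturation levels after white balance):

```cpp
#include <algorithm>
#include <array>

// Clip every channel at the *lowest* per-channel clipping point, so
// near-saturated highlights stay neutral instead of turning magenta.
std::array<int, 3> clip_common(std::array<int, 3> px,
                               std::array<int, 3> sat) {
    int level = *std::min_element(sat.begin(), sat.end()); // e.g. green's 40000
    for (int& v : px) v = std::min(v, level);
    return px;
}
```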
Thank you! :)
http://www.libraw.org/docs/API-datastruct-eng.html#libraw_output_params_t
Look for the user_qual parameter in the settings.
The default is 3 (AHD).
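For quick reference, the user_qual values documented on that page map to demosaicing algorithms roughly like this (a sketch; check the linked docs for the authoritative list):

```cpp
#include <string>

// user_qual values follow dcraw's -q option:
// 0 = linear interpolation, 1 = VNG, 2 = PPG, 3 = AHD (default), 4 = DCB.
std::string demosaic_name(int user_qual) {
    switch (user_qual) {
        case 0: return "linear";
        case 1: return "VNG";
        case 2: return "PPG";
        case 3: return "AHD";
        case 4: return "DCB";
        default: return "unknown";
    }
}
```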
I've seen in this comment that half_size is lossless for bayer cameras.
Does that mean that the default demosaicing algorithm (LGPL-licensed) in LibRaw is as simple as what half_size does (regrouping pixels in groups of 4)?
Or does dcraw_process() use a more advanced algorithm? If yes, which algorithm is it? DCB?
Thanks... :)
Sylvain.
Thanks Alex!
imgdata.idata.cdesc contains the color description string (RGBG or CMYG).
Use dcraw-like code to output it, substituting fcol() with COLOR().
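A dcraw-style sketch of that lookup (the fc() helper here is illustrative; inside LibRaw you would call LibRaw::COLOR(row, col) instead):

```cpp
// Map a pixel position to its color letter via the cdesc string
// ("RGBG" for typical Bayer sensors, "CMYG" for CMYG sensors).
char pixel_color(unsigned filters, const char* cdesc, int row, int col) {
    // dcraw's CFA color index: 0..3, used as an index into cdesc
    int c = (filters >> ((((row << 1) & 14) + (col & 1)) << 1)) & 3;
    return cdesc[c];
}
```

With an RGGB sensor (filters = 0x94949494) this yields 'R' at (0,0), 'G' at (0,1) and (1,0), and 'B' at (1,1).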
Very helpful, thank you!
There are several variables/calls that deliver this specific information:
1) LibRaw::imgdata.idata.dng_version: 0 for non-DNG, non-zero for DNG files
2) LibRaw::get_decoder_info() call
3) four calls:
int is_fuji_rotated();
int is_sraw();
int is_nikon_sraw();
int is_coolscan_nef();
and protected
virtual int is_phaseone_compressed();
These will return non-zero in some specific cases that might be interesting in a processing workflow.
Thank you.
There seem to be quite a few tricks behind all that. So I think I'll stick with using dcraw processing first... ;)
Yes, in bayer cameras each pixel is monochrome (R, G, or B). Also, in 2x2 bayer pattern (two green pixels, one R and B), two greens are usually on different ADC channels, so it is safer to treat them as separate channels (G and G2) with different black levels.
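A sketch of per-channel black subtraction under that treatment (illustrative; assuming cfa_color is 0..3 for R, G, B, G2 and black[] holds the four per-channel levels):

```cpp
#include <algorithm>

// Subtract a per-channel black level, treating the two greens (G at
// index 1, G2 at index 3) as separate channels, since they may come from
// different ADC channels with different offsets. Clamp at zero.
int subtract_black(int raw_value, int cfa_color, const int black[4]) {
    return std::max(0, raw_value - black[cfa_color]);
}
```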