OK, I was confused by raw2image().
So after unpack(), an unaltered version of the unpacked data is kept somewhere other than imgdata.image... Thank you... :)
unpack() does not fill/allocate the image array.
I thought that unpack() filled imgdata.image with non-demosaiced data (only one component per pixel on Bayer layouts), and that dcraw_process() read those values and filled in the missing components in the same buffer using its demosaicing algorithm...
Is dcraw_process() reading its non-demosaiced data from a buffer other than imgdata.image?
You can call dcraw_process() multiple times, no additional data move needed.
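A minimal sketch of that workflow (file names are placeholders): unpack once, then run dcraw_process() again with changed parameters; the untouched unpacked data stays in imgdata.rawdata, so nothing is re-read:

    #include "libraw/libraw.h"

    int main()
    {
        LibRaw proc;
        if (proc.open_file("sample.nef") != LIBRAW_SUCCESS)   // placeholder file name
            return 1;
        if (proc.unpack() != LIBRAW_SUCCESS)
            return 1;

        // First pass: default parameters.
        proc.dcraw_process();
        proc.dcraw_ppm_tiff_writer("pass1.ppm");

        // Second pass: same unpacked data, different parameters; nothing is re-read.
        proc.imgdata.params.no_auto_bright = 1;
        proc.dcraw_process();
        proc.dcraw_ppm_tiff_writer("pass2.ppm");

        proc.recycle();
        return 0;
    }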
So that's how dcraw_process() works in that case: clipping every channel to 40000, and then stretching the 0..40000 range up to 0..65535?
To avoid colored highlights, you need to clip all channels at the same level (after WB is applied).
So, if red clips at 60000 and green at 40000 (as in your example values), you need to clip all three channels at the green clip level.
Without that you'll get colored (magenta, in this case) highlights (and we *frequently* see it in video, even in high-end productions such as Formula 1 broadcasts).
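A rough sketch of that clipping step (the helper name and buffer layout are my own illustration, not LibRaw API): after WB, clip every channel at the lowest per-channel clip level so highlights stay neutral:

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>

    // Clip all channels of a 4-component 16-bit buffer (laid out like
    // imgdata.image) at the most restrictive per-channel clip level.
    void clip_highlights(std::uint16_t (*img)[4], std::size_t npixels,
                         const std::uint16_t clip_level[4])
    {
        std::uint16_t common = *std::min_element(clip_level, clip_level + 4);
        for (std::size_t i = 0; i < npixels; i++)
            for (int c = 0; c < 4; c++)
                img[i][c] = std::min(img[i][c], common);
    }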
Thank you! :)
http://www.libraw.org/docs/API-datastruct-eng.html#libraw_output_params_t
Look for the user_qual parameter in the settings.
The default is 3 (AHD).
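For example, a small sketch of selecting the algorithm (assuming an already opened and unpacked LibRaw instance):

    #include "libraw/libraw.h"

    // Select the demosaicing algorithm before calling dcraw_process().
    void set_demosaic(LibRaw &proc, int quality)
    {
        // 0 = linear, 1 = VNG, 2 = PPG, 3 = AHD; other algorithms (e.g. DCB)
        // are available in builds with the extra demosaic packs.
        proc.imgdata.params.user_qual = quality;
    }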
I've seen in this comment that half_size is lossless for Bayer cameras.
Does that mean that the default demosaicing algorithm (LGPL-licensed) in LibRaw is as simple as what half_size does (regrouping pixels in groups of 4)?
Or does dcraw_process() use a more advanced algorithm? If yes, which algorithm is it? DCB?
Thanks... :)
Sylvain.
Thanks Alex!
imgdata.idata.cdesc contains the color description string (RGBG or CMYG).
Use dcraw-like code to output it, substituting fcol() with COLOR().
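A small sketch of that (assuming an already opened and unpacked LibRaw instance named proc):

    #include <cstdio>
    #include "libraw/libraw.h"

    // Print the 2x2 CFA pattern letters, dcraw-style: COLOR(row,col) is an
    // index into imgdata.idata.cdesc ("RGBG", "CMYG", ...).
    void print_cfa_pattern(LibRaw &proc)
    {
        for (int row = 0; row < 2; row++) {
            for (int col = 0; col < 2; col++)
                printf("%c", proc.imgdata.idata.cdesc[proc.COLOR(row, col)]);
            printf("\n");
        }
    }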
Very helpful, thank you!
There are several variables/calls that deliver specific information:
1) LibRaw::imgdata.idata.dng_version - 0 for non-DNG, non-zero for DNG files
2) LibRaw::get_decoder_info() call
3) four calls:
int is_fuji_rotated();
int is_sraw();
int is_nikon_sraw();
int is_coolscan_nef();
and protected
virtual int is_phaseone_compressed();
These will return non-zero in some specific cases that might be interesting in a processing workflow.
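A sketch putting those calls together (assuming an already opened LibRaw instance; the output strings are illustrative only):

    #include <cstdio>
    #include "libraw/libraw.h"

    // Report format-specific information after open_file().
    void report_format(LibRaw &proc)
    {
        if (proc.imgdata.idata.dng_version)
            printf("DNG file, version 0x%x\n", proc.imgdata.idata.dng_version);

        libraw_decoder_info_t di;
        if (proc.get_decoder_info(&di) == LIBRAW_SUCCESS)
            printf("decoder: %s, flags 0x%x\n", di.decoder_name, di.decoder_flags);

        if (proc.is_fuji_rotated())  printf("Fuji rotated sensor\n");
        if (proc.is_sraw())          printf("sRAW file\n");
        if (proc.is_nikon_sraw())    printf("Nikon small RAW\n");
        if (proc.is_coolscan_nef())  printf("Coolscan NEF scan\n");
    }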
Thank you.
There seem to be quite a few tricks behind all that. So I think I'll stick with using dcraw processing first... ;)
Yes, in Bayer cameras each pixel is monochrome (R, G, or B). Also, in the 2x2 Bayer pattern (two green pixels, one R and one B), the two greens are usually on different ADC channels, so it is safer to treat them as separate channels (G and G2) with different black levels.
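A small sketch of reading those levels (assuming an opened and unpacked instance; treating the first four cblack entries as per-channel offsets on top of the common black level is my reading of the structure):

    #include <cstdio>
    #include "libraw/libraw.h"

    // Print per-channel black levels: imgdata.color.black is the common base,
    // imgdata.color.cblack[0..3] are per-channel offsets (so G and G2 can differ).
    void print_black_levels(LibRaw &proc)
    {
        for (int c = 0; c < 4; c++)
            printf("channel %c: black = %u\n",
                   proc.imgdata.idata.cdesc[c],
                   proc.imgdata.color.black + proc.imgdata.color.cblack[c]);
    }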
Thanks again, and sorry about the false assumption that dcraw_process() was 8 bits...
I noticed the "only one of rgbg is not null for each pixel" thing... Why is that so?
Could it be that my 16MPixels camera would in fact be a camera with 4M realPixels (16M subPixels, each of them being either r, g or b), and not a 16*4M subPixels camera?
dcraw_process() is 16-bit internally. The processing result is scaled to 8 bits (if requested) in the output routines.
raw2image() converts the Bayer data (flat, 1 value per pixel) into the 4-component image[][4] array. Only one component of each pixel is non-zero after raw2image(). This is NOT a demosaiced image, but data prepared for demosaicing (this format is backward-compatible with old LibRaw versions; that's why the raw2image() call is provided).
To get the color index (the index of the non-zero element in the 4-component pixel), use the COLOR(row,col) call.
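A small sketch of that (assuming unpack() and raw2image() have already been called on proc):

    #include <cstdio>
    #include "libraw/libraw.h"

    // After raw2image(), each image[i] has exactly one non-zero component;
    // COLOR(row,col) gives its index and cdesc gives its letter.
    void print_pixel(LibRaw &proc, int row, int col)
    {
        int idx = row * proc.imgdata.sizes.iwidth + col;
        int c = proc.COLOR(row, col);
        printf("(%d,%d): channel %c = %u\n", row, col,
               proc.imgdata.idata.cdesc[c],
               (unsigned)proc.imgdata.image[idx][c]);
    }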
Yes.
To get behaviour very similar to dcraw (including the 'pink clouds' problem), set params.adjust_maximum_thr to 0.0f.
To change auto-brightening, use auto_bright_thr or manual brightness adjustment.
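A sketch of setting those parameters (assuming an opened and unpacked instance named proc):

    #include "libraw/libraw.h"

    // Apply the settings discussed above before dcraw_process().
    void configure(LibRaw &proc)
    {
        proc.imgdata.params.adjust_maximum_thr = 0.0f;  // dcraw-like maximum ('pink clouds' possible)
        proc.imgdata.params.no_auto_bright = 1;         // turn auto-brightening off entirely...
        // ...or keep it and change the saturated fraction (default is 1%):
        // proc.imgdata.params.auto_bright_thr = 0.001f;
        proc.imgdata.params.bright = 1.0f;              // manual brightness multiplier
    }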
Are you saying that I can manually change the way LibRaw behaves?
Thanks.
LibRaw 0.17 works fine with these files, so switch to the current release.
(I won't investigate this problem in LibRaw 0.16, because the 0.16 branch will receive only critical security bugfixes.)
https://yadi.sk/d/DQkLznQcj9JeT
https://yadi.sk/d/FIj0e9Uhj9JdL
With default settings, LibRaw processing:
1) Adjusts the data maximum using the real maximum in the RAW file (use params.adjust_maximum_thr to adjust)
2) Brightens the image to put 1% of pixels into saturation (use params.auto_bright_thr to adjust, or turn brightening off via params.no_auto_bright)
Darktable uses its own raw processing, where everything is done differently.
egor@shutter:~/src/libraw/LibRaw-0.16.2/bin$ ./simple_dcraw -v -4 -T ~/tmp/raw/IMGP3859.PEF
Processing file /home/egor/tmp/raw/IMGP3859.PEF
/home/egor/tmp/raw/IMGP3859.PEF: data corrupted at 160143
Cannot unpack /home/egor/tmp/raw/IMGP3859.PEF: Input/output error
egor@shutter:~/src/libraw/LibRaw-0.16.2/bin$ cd ../../LibRaw-0.17.0/bin/
egor@shutter:~/src/libraw/LibRaw-0.17.0/bin$ ./simple_dcraw -v -4 -T ~/tmp/raw/IMGP3859.PEF
Processing file /home/egor/tmp/raw/IMGP3859.PEF
Writing file /home/egor/tmp/raw/IMGP3859.PEF.tiff
egor@shutter:~/src/libraw/LibRaw-0.17.0/bin$
Could you please upload a sample file somewhere (Dropbox, etc.) and share the link?
If you had two PNG converters that take a BMP and turn it into PNG form, and you found that the output from these converters differed, what would you do? Note that PNG simply applies lossless compression to a BMP; any difference means someone is at fault!
Why does this change for RAW converters? Aren't RAW converters "supposed" to be "lossless", or are they horrifically lossy programs that have been touted as necessary by people who have a vested interest in keeping the status quo?
OpenRAW makes far more sense. Proprietary formats don't matter so much as holding a knife to Adobe's throat and saying "we're documenting our RAWs now, so update your code for all platforms, because you only exist because of our customers." Companies working together for the betterment of the consumer, rather than whatever they think they're doing, should be the real goal here.