CFA does not define the response. The product of CFA transmission spectra, silicon response, and in-camera raw data processing does. pinv is a very useful tool if you need just a matrix, or are going to overload a matrix with curves. Conversion from camera RGB to XYZ is not useful, however; there is no need for intermediate steps.
I think I figured it out: it makes physically more sense to express the XYZ primaries in terms of the primaries of the CFA. The best you can then do, when converting colors from the CFA to XYZ, is to minimize the error made. This is what is implemented in the pseudoinverse() function, as described here:
http://www.sci.utah.edu/~gerig/CS6640-F2012/Materials/pseudoinverse-cis61009sl10.pdf
This is done by minimizing the squared error of an overdetermined system of equations.
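For illustration, here is a minimal sketch (plain C++, not the actual dcraw/LibRaw pseudoinverse() code) of that least-squares idea: for an overdetermined system A*x = b with A of size N x 3, the Moore-Penrose pseudoinverse A+ = (A^T A)^-1 A^T gives the x = A+ * b that minimizes the squared error |A*x - b|^2. The matrix sizes and example values below are just placeholders.

#include <cstdio>

// Invert a 3x3 matrix via the adjugate; returns false if it is singular.
static bool invert3x3(const double m[3][3], double out[3][3]) {
  double det = m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1]) -
               m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0]) +
               m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
  if (det == 0.0) return false;
  for (int i = 0; i < 3; i++)
    for (int j = 0; j < 3; j++)
      out[j][i] = (m[(i + 1) % 3][(j + 1) % 3] * m[(i + 2) % 3][(j + 2) % 3] -
                   m[(i + 1) % 3][(j + 2) % 3] * m[(i + 2) % 3][(j + 1) % 3]) / det;
  return true;
}

// Pseudoinverse of an N x 3 matrix A (row-major); the result Aplus is 3 x N.
static bool pseudoinverse(const double *A, int N, double *Aplus) {
  double ata[3][3] = {{0}}, inv[3][3];
  for (int i = 0; i < N; i++)          // accumulate A^T * A (3 x 3)
    for (int r = 0; r < 3; r++)
      for (int c = 0; c < 3; c++)
        ata[r][c] += A[i * 3 + r] * A[i * 3 + c];
  if (!invert3x3(ata, inv)) return false;
  for (int r = 0; r < 3; r++)          // multiply (A^T A)^-1 by A^T (3 x N)
    for (int i = 0; i < N; i++) {
      double s = 0;
      for (int k = 0; k < 3; k++) s += inv[r][k] * A[i * 3 + k];
      Aplus[r * N + i] = s;
    }
  return true;
}

int main() {
  // Example: least-squares inverse of a 4 x 3 matrix, e.g. mapping four
  // camera channels onto three XYZ coordinates (values are made up).
  const int N = 4;
  double A[N * 3] = {1.0, 0.2, 0.1,  0.3, 1.0, 0.2,  0.1, 0.4, 1.0,  0.5, 0.5, 0.3};
  double Aplus[3 * N];
  if (pseudoinverse(A, N, Aplus))
    for (int r = 0; r < 3; r++)
      printf("%8.4f %8.4f %8.4f %8.4f\n",
             Aplus[r * N], Aplus[r * N + 1], Aplus[r * N + 2], Aplus[r * N + 3]);
  return 0;
}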
I'm not excellent in color theory, but in general:
- the camera responds to any visible-spectrum signal
- and also to UV and IR wavelengths.
So, the triangle corners (Bayer primaries) may reside outside of the 'human eye locus' on the xy 'visible colors' diagram (to include as many colors as possible inside the triangle).
The values you calculate from camera color profile data are derived from the profile creator's intentions (color gamut, etc.), not the real 'Bayer primaries'.
If you do not want any color profile, you need to set params.output_color=0.
With this setting, the image will be white balanced, demosaiced, and scaled to fill the 16-bit range.
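For reference, a minimal sketch of that setting in code (assuming a placeholder file name "sample.cr2"; only output_color and output_bps are touched here, everything else keeps its defaults):

#include <libraw/libraw.h>

int main() {
  LibRaw proc;
  proc.imgdata.params.output_color = 0;  // 0 = no output color profile (camera color space)
  proc.imgdata.params.output_bps = 16;   // keep 16-bit output when writing the file
  if (proc.open_file("sample.cr2") != LIBRAW_SUCCESS) return 1;
  proc.unpack();
  proc.dcraw_process();                  // white balance, demosaic, scale to 16-bit range
  proc.dcraw_ppm_tiff_writer("sample.ppm");
  return 0;
}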
So calling dcraw_process() with just the settings written above will create an RGB image without any further kind of processing? Thanks!
noise reduction and highlight recovery are not enabled by default
thanks
Here is a sample of the AHD maze artifact: https://picturecode.cachefly.net/photoninja/images/demosaic_before.jpg
Your blue-dot artifacts are very different from that.
I know this is old, but 17barski, do you know what specifically you changed or set things to? I'm having the same basic problem where I set dcraw_emu to "Set as Startup Project", compiled everything, saw that dcraw_emu.exe and libraw.dll are in the debug folder, and still get errors saying that the files don't exist. Thanks
Found it. Mea culpa.
I had added the path to the old LibRaw 0.17 include files. Removed it and it now works.
But awesome how fast you replied! :)
Yeah. I also did your test (same image); it produces a 16-bit PPM.
I guess I need to clean up my code and see where this behaviour comes from.
Sorry, no MSVC 2015 at hand, so I just tried with MSVC 2013:
call "C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" x64
nmake -f Makefile.msvc
After that:
mem-image -4 -6 filename.cr2
filename.cr2.ppm (16 bit, linear) was produced, looks OK
Sure! Thanks
http://www.libraw.org/docs should be updated then to remove the LibRaw license.
Thanks!
WB settings do not change the raw data, so you may use any in-camera setting.
What if a manual WB is chosen and a -magenta shift is entered (I don't think most are calibrated in CC)? Will this give a similar result?
For example, Sony allows such a compensation. I have already made this my standard in my raw processing: I shoot at one temperature with -magenta, and then adjust to the proper temperature during raw processing.
At first glance, presuming the filter blocks only the green channel, this means the red and blue channels should receive one stop more exposure. However, the blue and red channels are sensitive to a fairly wide range of the spectrum. That's why the actual per-channel exposures changed by less than that, as is obvious when looking at the white balance coefficients.
And if the data is scaled to 8 bits somewhere, your result looks correct.
color.maximum is in the RAW data domain, so for a 14-bit camera it is about 16000.
imgdata.image[], after processing, is in the 0..64k range.
Your luminosity calculation is, to a first approximation, in the same range as imgdata.image, so 64k.
So your JPEG generation code will show the upper two stops of the image as 'overflow'.
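For illustration, a minimal sketch (not the poster's code) of computing an 8-bit preview without that 'overflow', assuming the (2*red + 3*green + blue)/6 luminosity approximation used in this thread: compute the luminosity in the 16-bit domain and shift right by 8 before storing it in an unsigned char.

#include <libraw/libraw.h>
#include <vector>

// Call after dcraw_process(): imgdata.image[i] holds four unsigned shorts
// (R, G, B, G2) per pixel in the 0..65535 range.
std::vector<unsigned char> make_preview(LibRaw &proc) {
  const size_t npix = (size_t)proc.imgdata.sizes.iwidth * proc.imgdata.sizes.iheight;
  std::vector<unsigned char> gray(npix);
  for (size_t i = 0; i < npix; i++) {
    const unsigned short *px = proc.imgdata.image[i];
    unsigned lum16 = (2u * px[0] + 3u * px[1] + px[2]) / 6u;  // still 16-bit range
    gray[i] = (unsigned char)(lum16 >> 8);                    // scale 0..65535 -> 0..255
  }
  return gray;
}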
Not so. It looks like you've divided the data by 256 (then recovered it back?). This step is missing.
In your code:
the data should be in the 16-bit (64k) range.
Next step:
is the data upscaled to 24 bits?
The data array is set in my first example. It is just an approximation of the luminosity of each pixel: (2*red + 3*green + blue)/6.
This link should show you the output of the above as JPEG files. You can see the integer overflow in the overexposed clouds: https://1drv.ms/f/s!AgLCmsxNxVV_jelOf_MIR7MeGcsXvQ
imgdata.image is indeed scaled in the LibRaw::scale_colors() call.
For your test: what is contained in the data array?
Hmm, it seems to me that the processed data are not being scaled to the 0-65535 range. In fact, the data that are not saturated fall into the range 0 - color.maximum; only saturated values are larger than color.maximum.
Is there a way I can upload an image to the forum? I have generated a greyscale JPEG from the data vector in my code above using
The output shows the buffer overrun from the conversion to unsigned char very clearly.
Phil
color.maximum is the 'maximum data value permitted by the data format'.
In some cases, the real maximum is smaller.
There is a color.data_maximum field, but it is filled (based on real data) at the raw2image (or dcraw_process) stage, so it is not available just after unpack().
In your example you compare the raw data maximum (that is, the data range of the raw data) with processed values.
This is not correct, because the output data are scaled to use the full data range (16 bits).
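For illustration, a minimal sketch (with a placeholder file name) that prints the three different 'maximum' notions being discussed: color.maximum (format limit), color.data_maximum (real raw peak, filled during dcraw_process()), and the actual peak of the processed, 16-bit-scaled imgdata.image[]:

#include <libraw/libraw.h>
#include <cstdio>

int main() {
  LibRaw proc;
  if (proc.open_file("sample.cr2") != LIBRAW_SUCCESS) return 1;
  proc.unpack();
  // Just after unpack(): maximum reflects the raw format (e.g. ~16383 for 14-bit data),
  // data_maximum is not filled yet.
  printf("format maximum: %u\n", proc.imgdata.color.maximum);

  proc.dcraw_process();
  // After dcraw_process(): data_maximum holds the real raw peak, while the
  // processed image has been rescaled to the full 16-bit range.
  printf("real raw maximum (data_maximum): %u\n", proc.imgdata.color.data_maximum);

  unsigned short peak = 0;
  size_t npix = (size_t)proc.imgdata.sizes.iwidth * proc.imgdata.sizes.iheight;
  for (size_t i = 0; i < npix; i++)
    for (int c = 0; c < 4; c++)
      if (proc.imgdata.image[i][c] > peak) peak = proc.imgdata.image[i][c];
  printf("processed image peak: %u\n", peak);
  return 0;
}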
Thank you, Alex.
I have just loaded up an image and I see the WB_Coeffs array. Elements 1, 3, 4, 10, 11 and 14 are filled, which presumably correspond to the 6 options of my EOS 40D (daylight, cloudy, shade, tungsten, fluorescent, flash). That's everything I needed.
Thanks for the excellent info.
Phil
1st:
With LibRaw 0.18 (currently in Beta1, Beta2 to be published soon),
in-camera WB multipliers are extracted into
imgdata.color.WB_Coeffs[256][4];
and
imgdata.color.WBCT_Coeffs[64][5];
This extraction is made for all RAWs with embedded color presets, not only Canons.
The 1st array (WB_Coeffs) is indexed by the EXIF LightSource value, so to get, for example, the D65 preset, you need to inspect WB_Coeffs[21] for non-zero values.
The 2nd array (WBCT_Coeffs) is filled from 0 to 63:
WBCT_Coeffs[i][0] is the color temperature (from camera settings)
and
WBCT_Coeffs[i][1..4] are the WB coefficients.
There is no automated procedure to copy these values into processing, so one needs to examine the array data (or simply fill a WB drop-down with these values), then copy the needed WB coefficients into
imgdata.params.user_mul[] to use at the dcraw_process() stage.
So, the processing sequence is (sketched in code below):
LibRaw::open_file();
LibRaw::unpack();
... examine WB_Coeffs/WBCT_Coeffs
... copy the needed values into user_mul
LibRaw::dcraw_process();
(and, of course, you may call dcraw_process() multiple times without calling unpack() again on the same file)
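A minimal sketch of that sequence (with a placeholder file name), picking the D65 preset via WB_Coeffs[21] as described above; WBCT_Coeffs could be scanned the same way if you want to select by color temperature instead:

#include <libraw/libraw.h>
#include <cstdio>

int main() {
  LibRaw proc;
  if (proc.open_file("sample.cr2") != LIBRAW_SUCCESS) return 1;
  if (proc.unpack() != LIBRAW_SUCCESS) return 1;

  // Examine the D65 preset (EXIF LightSource index 21); non-zero values
  // mean the camera stored this preset in the file.
  const int D65 = 21;
  bool have_preset = false;
  for (int c = 0; c < 4; c++)
    if (proc.imgdata.color.WB_Coeffs[D65][c]) have_preset = true;

  if (have_preset)
    for (int c = 0; c < 4; c++)   // copy the preset into user_mul for dcraw_process()
      proc.imgdata.params.user_mul[c] = (float)proc.imgdata.color.WB_Coeffs[D65][c];
  else
    printf("No D65 preset stored in this file; default WB will be used\n");

  proc.dcraw_process();
  proc.dcraw_ppm_tiff_writer("sample_d65.ppm");
  return 0;
}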
2nd:
Converting WB coefficients to CCT and back is not very complex (look into the Adobe DNG SDK sources for sample code), but the results of this procedure depend on the color profile and camera calibration data used.
So the calculated CCT/Tint will, most likely, not be compatible with other programs (or even with the in-camera WB setting by CCT).