We have used this site engine (Drupal 7) for many sites for many years and have never seen such a problem.
Hi!
Is it possible to try Windows binaries of v0.20 beta?
Or when will binaries be available?
Thanks.
Yes, this camera uses the E-M5MarkII string in the EXIF 0x0110 tag (Model).
LibRaw 0.20 (Beta 1 just published) provides the 'E-M5 Mark II' string in normalized_model[].
BTW, LibRaw::cameraList() is 'user readable': it contains some notes and special cases (like 'lossless only' or 'CHDK hack'). It is not intended to be matched against model/normalized_model.
Yes, black/cblack reflect the different black level specifications in RAW metadata:
black - a single black level
cblack[0-3] - per-channel black levels, in channel numbering order (channel numbers are returned by COLOR(..))
cblack[4-5] - define the pattern black level (e.g. a 6x6 pattern for X-Trans)
As a rule, black is the base level and cblack is a correction, so if black is non-zero and cblack[..] is non-zero (for a given channel or row/col), the resulting black level for a pixel/channel is the sum black + cblack[channel] + cblack[pattern]. In practice all three are rarely present at once.
white is either the format maximum (defined by bits per pixel) or the maximum value defined by metadata (DNG WhiteLevel).
linear_max is the 'specular white' read from metadata (if present in the file).
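For illustration, a minimal sketch of how these fields combine for a single pixel, assuming a LibRaw instance rp with the data already unpacked, and assuming the pattern values are stored starting at cblack[6] (after the two dimension entries):

#include <libraw/libraw.h>

// Effective black level for the pixel at (row, col):
// base level + per-channel correction + pattern correction.
unsigned effective_black(LibRaw &rp, int row, int col)
{
    const libraw_colordata_t &c = rp.imgdata.color;
    unsigned bl = c.black;                       // base black level
    bl += c.cblack[rp.COLOR(row, col)];          // per-channel correction (cblack[0-3])
    if (c.cblack[4] && c.cblack[5])              // pattern correction (assumed layout: values after cblack[5])
        bl += c.cblack[6 + (row % c.cblack[4]) * c.cblack[5] + (col % c.cblack[5])];
    return bl;
}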
Yeah, the weird histogram is from the inverted data. I think I'm going to have to deeply familiarize myself with what scale_colors does, because I think something in that process is relying on values that were generated prior to inverting the raw data.
wb_coeffs are in channel numbering order (as returned by COLOR(row,col))
Alex,
So it looks like cmatrix and ccm are also for converting from camera space to sRGB. Correct?
The ccm appears to be read only from the file and not computed, and it appears to be normalized by sum(imgdata.color.ccm).
cmatrix also appears to be either read from the file or derived from camera properties.
So, if these are not available in the file, can I assume them to be all zeros?
Is it safe to assume that cam_xyz and rgb_cam are sufficient for most color space transformations?
* rgb_cam for direct conversion to sRGB? (a sketch follows this list)
* pseudoinverse(cam_xyz) to move to XYZ space and then apply any suitable transformation to move to a colorspace of my choice, such as Rec.2020 or ProPhoto?
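(For reference, a minimal sketch of the first bullet, mirroring how dcraw's convert_to_rgb() applies rgb_cam for sRGB output; the input is assumed to be white-balanced, linear camera data.)

// Sketch: apply rgb_cam to one camera-space pixel to obtain linear sRGB (before gamma).
// 'colors' is imgdata.idata.colors (3 or 4).
void cam_to_srgb(const float rgb_cam[3][4], const float cam[4], int colors, float srgb[3])
{
    for (int i = 0; i < 3; i++) {
        srgb[i] = 0.0f;
        for (int c = 0; c < colors; c++)
            srgb[i] += rgb_cam[i][c] * cam[c];
    }
}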
Also, a quick question about wb_coeffs: are the coefficients in the order of the CFA pattern, or are they always R G1 B G2?
Regards,
Dinesh
Is this histogram from the inverted data?
If so, I cannot answer the question; I simply do not know what happens with processing if the channels are inverted.
Here are histograms of your tether8387 file processed with
1) dcraw_emu -T -W -a -b 1 (Tiff, no_auto_bright,auto-wb, bright=1): https://www.dropbox.com/s/6l67ry7y03xlj7p/screenshot%202020-05-04%2009.0...
2) same, but -b 0.8: https://www.dropbox.com/s/7o1w581gamxy4nz/screenshot%202020-05-04%2009.0...
I do not see anything unexpected here.
So I've made some progress by setting:
use_auto_wb = 1
no_auto_bright = 1
bright = 0.8
But I think the auto scaling is having some issues scaling the channels correctly. I think it's probably because the upper and lower bounds of each channel were calculated with the un-inverted negative?
This is the original histogram from RawDigger: http://ur.sine.com/temp/tether8387-Full-8000x5320.png
...and this is the histogram from the generated TIFF from Rawtherapee: http://ur.sine.com/temp/post-process-channels.png
Is there a good way to have libRaw recalculate the per-channel min and max, or alternatively, to set them manually?
Accidentally replied to the wrong thread!
Please see https://www.libraw.org/news/libraw-snapshot-201910
I hope you will support the Canon EOS RP raw format (CR3).
cmatrix is camera color data similar to rgb_cam, but calculated from the in-RAW color data.
ccm is also a 'camera color matrix', retrieved from the RAW 'as is' (excluding normalization).
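For illustration, a minimal sketch of how one might check whether a file actually provided these matrices, assuming (not guaranteed by the docs) that matrices absent from the file are left all-zero:

#include <libraw/libraw.h>

// Returns true if any element of a 3x4 color matrix is non-zero,
// i.e. the matrix was presumably filled from the RAW file.
static bool matrix_present(const float m[3][4])
{
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 4; j++)
            if (m[i][j] != 0.0f) return true;
    return false;
}

// Usage (rp is an opened LibRaw instance):
//   bool has_ccm     = matrix_present(rp.imgdata.color.ccm);
//   bool has_cmatrix = matrix_present(rp.imgdata.color.cmatrix);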
Alex,
I am not sure whether my earlier follow-up was actually posted, so apologies if this is a repeated post.
The link you had sent me was really helpful. I think I now understand how the cam_xyz, rgb_cam, cam_mul and pre_mul matrices work.
However, the code referenced in the link appears to be older and hence does not contain any reference to ccm. Also, the link does not contain any information/comments about cmatrix.
I just noticed that the latest code appears to populate cmatrix primarily through calls to cam_xyz_coeff.
I would appreciate any help with these two matrices (ccm and cmatrix).
Dinesh
no_auto_scale=1 is a very special-use parameter. If set, it instructs dcraw_process() to skip the scale_colors() step (this step performs WB and data scaling). It is not intended for use with LibRaw's standard interpolation methods; this parameter is targeted at the case where one implements one's own interpolation (demosaic) and wants to deal with unchanged (not scaled) source data.
It looks like we need to document no_auto_scale better in the LibRaw docs :)
To manually control output brightness, use no_auto_bright=1 and change imgdata.params.bright from 1 to something else.
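For illustration, a minimal sketch of manual brightness control (the input file name is hypothetical; all other parameters are left at their defaults):

#include <libraw/libraw.h>

int main()
{
    LibRaw rp;
    if (rp.open_file("negative.cr2") != LIBRAW_SUCCESS)  // hypothetical input file
        return 1;
    rp.unpack();

    rp.imgdata.params.no_auto_bright = 1;  // disable automatic brightening
    rp.imgdata.params.bright = 0.8f;       // manual brightness (default is 1.0)

    rp.dcraw_process();
    rp.dcraw_ppm_tiff_writer("out.ppm");   // brightness is applied when the output image is built
    return 0;
}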
I tried auto white balancing and it went a long way toward fixing the issues. I did have to manually reduce the brightness though. Is there a way to control the brightness adjustment in ppm_tiff_writer?
I'm also noticing that the red and blue channels appear very compressed in the output image relative to the green channel. I'm not sure what would cause that.
[edit] I set no_auto_bright = 1 and no_auto_scale = 1 and that seems to have helped. Not sure why one of those would have compressed one channel and not the others. I think I'll probably need to do some custom analysis of the negative to set the white balance and scale of the resulting image. Something about inverting the values seems to throw off the automatic processes.
I will take a look at that link. I will get back to you if I have any questions.
Dinesh
LibRaw's postprocessing is mostly similar to dcraw.c's (that is why the postprocessing procedure is called dcraw_process()).
So most of your questions are answered on the 'dcraw annotated' site: https://ninedegreesbelow.com/files/dcraw-c-code-annotated-code.html
For DNG-specific processing (color data extraction, etc.), look into the DNG specs.
1) imgdata.params.output_color=0 will output the resulting demosaiced image in 'camera' color space.
3) Highlight clipping may occur because:
a) the red or blue channel is clipped due to WB,
b) or auto-brightness is applied in ppm_tiff_writer or make_mem_image.
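A minimal sketch of point 1, assuming a LibRaw instance rp with the file already opened and unpacked:

rp.imgdata.params.output_color = 0;   // 0 = camera color space (1 = sRGB, the default)
rp.imgdata.params.output_tiff = 1;    // write TIFF instead of PPM
rp.dcraw_process();                   // demosaiced output stays in camera space
rp.dcraw_ppm_tiff_writer("camera_space.tiff");   // hypothetical output name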
Ah, ok, that makes sense, thanks for the explanation. I guess I'll have to do something more sophisticated than just inverting the wb coefficients, because the output channels are getting cut off on the high end. Thanks for explaining that!
I want to try to invert the RAW data, rather than the processed data, for a few reasons. The first is that I eventually want to make a tool that outputs a RAW file (using the DNG SDK). While it is possible to invert a negative in most RAW editors by inverting the tone curve, it makes the resulting image difficult to work with, and being able to work with the inverted image in RAW would make things a lot easier.
The second reason is a bit more abstract, and is somewhat of an experiment. The way the different color emulsion layers work in a negative—specifically the orange mask that negates the color leakage in the magenta and cyan channels—means that for any operation on those channels to work correctly, you need to operate in the negative's native color space. The matrix multiplication that converts the RAW data into a color space alters the curves in the individual color channels in a way that makes inverting and cancelling the effect of the orange mask difficult. I'm guessing that the demosaic process might screw with the color channels from the negative as well.
Finally, the ultimate goal of all of this is to build a tool that will take 3 RAW exposures—one for each color channel—created by taking a photo of the negative with red, green and blue light, and composite them together. The tool will then invert the image, white balance it and output to RAW.
A lot of this process is derived from the way film scanners work. Most high-end film scanners use RGB light sources and black-and-white sensors to read the three color channels, and account for the orange mask by adjusting the analog gain per channel.
1/0.418985 == 2.38672/1.0 (the G/R coefficient ratio). In other words, dcraw_process rescales the WB coefficients to get the full data range after WB is applied (the data are scaled at the same step).
BTW, I am still not sure that inverting RAW values leads in the right direction. Why not invert the image (with mask subtraction) after raw processing is applied?
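For illustration, a minimal sketch of the rescaling described above; this mirrors what dcraw's scale_colors() does with the multipliers in the default highlight mode, using the numbers from the post below:

#include <algorithm>

// dcraw_process rescales the WB multipliers so the smallest non-zero
// coefficient becomes 1.0, preserving the ratios between channels.
// E.g. {0.418985, 1.0, 0.599532, 0.0} -> {1.0, 2.38672, 1.430915, 2.38672}.
void normalize_wb(float mul[4])
{
    if (mul[3] == 0.0f) mul[3] = mul[1];           // a missing G2 coefficient copies G1
    float dmin = *std::min_element(mul, mul + 4);  // smallest coefficient
    for (int c = 0; c < 4; c++) mul[c] /= dmin;    // smallest becomes 1.0
}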
So I've done a bit of work trying to figure out how to invert the white balance of my image, but I'm seeing some counterintuitive behavior from the white balance process, and I'm hoping you can provide some insight into what's going on. I compiled LibRaw with DCRAW_VERBOSE enabled to get some visibility into which white balance coefficients were actually being used, and this is what I'm observing.
If I make no change to any white balance settings, I get: multipliers 2.858569 1.000000 1.338857 1.000000
If I run this loop, which inverts the white balance value for each channel:

for (int i = 0; i < 3; i++) {
    if (ImageProcessor.imgdata.color.cam_mul[i] != 0) {
        ImageProcessor.imgdata.params.user_mul[i] = 1 / ImageProcessor.imgdata.color.cam_mul[i];
    }
}

I can see that my user_mul values are set correctly, but dcraw_process uses different values for some reason:
user_mul: 0.418985 1.000000 0.599532 0.000000
multipliers 1.000000 2.386720 1.430915 2.386720
I even tried manually setting the user_mul values, but got the same result. I'm sort of at a loss for where the multipliers that get used are coming from.
Yes.
And many (not all) color matrices in adobe_coeff() have 9 items (3x3), not 12 (4x3).
Also, for 3-color images the 4th component is moved to image[][2] before the interpolation step.
Alex,
Thanks for the clarification. Does that mean that if colors == 3, then cam_xyz is effectively a 3x3 matrix?
Dinesh
colors is really the color (component) count :). It may be 3 or 4 even for RGB(G) files, if the two greens are different.
For 2x2 Bayer images, the color (index) is:
(imgdata.idata.filters >> (((row << 1 & 14) | (col & 1)) << 1) & 3);
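For reference, the same expression wrapped as a function, with a hypothetical RGGB example:

// Sketch: CFA color index for a 2x2 Bayer pattern; equivalent to dcraw's FC macro
// and to LibRaw::COLOR() for non-X-Trans sensors. 'filters' is imgdata.idata.filters.
static inline int fc(unsigned filters, int row, int col)
{
    return filters >> (((row << 1 & 14) | (col & 1)) << 1) & 3;
}
// For example, with filters == 0x94949494 (a common RGGB layout):
//   fc(f, 0, 0) == 0 (R), fc(f, 0, 1) == 1 (G), fc(f, 1, 0) == 1 (G), fc(f, 1, 1) == 2 (B);
// with this filters value both greens share index 1.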