I am using the proposed one from the documentation with ./configure and make. I have modified the makefile, but it has not helped. Can I specify it as an option for ./configure, or how can I use ./configure and enable X3F support?
I do not know what build method you use (provided makefiles, or configure, or qmake .pro files; there are many).
Add -DUSE_X3FTOOLS to the C and C++ compiler flags.
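With an autoconf-style ./configure, one way that usually works is to pass the define through the standard compiler-flag variables on the configure command line (a sketch; the -O2 flags are just an example):

    ./configure CFLAGS="-O2 -DUSE_X3FTOOLS" CXXFLAGS="-O2 -DUSE_X3FTOOLS"
    make clean && make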
Thank you for the hint. I have seen this, but I cannot find the place in the makefile where to put USE_X3FTOOLS.
Can you help me out ?
Thank you in advance.
Lightroom does not utilize ICC profiles for camera color calibration. It only accepts DCP.
ICC and DCP profiles are similar, but DCPs can be more sophisticated. I'm going to quote from the dcamprof documentation (an open-source command-line utility for making color profiles):
"The forward matrix which operates in D50 XYZ space using D50 as the reference illuminant is not unique to DNG profiles, it’s used for ICC profiles too."
"The color matrix is... DNG-specific, it’s used for estimating the temperature and tint of the scene illuminant based on a white balance setting."
Both ICC profiles and DCP profiles can contain LUTs (color lookup tables).
DCP profiles can contain two types: HueSatMap tables and LookTables. HSM tables basically do additional color correction; LookTables apply a second set of adjustments on top of that. Sometimes there is only an HSM table or a LookTable, and sometimes a DCP contains only a color matrix. They can be designed in a number of ways.
If you want to experiment with making your own color profiles, download dcamprof, rawtherapee, buy a color checker, and follow these instructions https://rawpedia.rawtherapee.com/How_to_create_DCP_color_profiles
As stated in the LibRaw 0.20 release notes (https://www.libraw.org/news/libraw-0-20-2-Release), you need to define USE_X3FTOOLS while building LibRaw; X3F support is not enabled by default.
I do not know what you're doing to convert rawdata into cv::Mat.
What I see on the screenshot is a 3xUINT8 data type. That is probably wrong, because the source (Bayer) image contains a single component per pixel.
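For illustration, a minimal sketch of how the unpacked Bayer data could be wrapped in OpenCV (my assumption of the intended conversion; the helper name is made up). Since raw_image holds one 16-bit component per pixel, CV_16UC1 is the matching cv::Mat type, not 3xUINT8:

    #include <libraw/libraw.h>
    #include <opencv2/core.hpp>

    // Wrap the unpacked Bayer mosaic (one 16-bit sample per pixel) in a cv::Mat.
    cv::Mat bayer_to_mat(LibRaw &rp)   // call after rp.unpack()
    {
        const libraw_image_sizes_t &s = rp.imgdata.sizes;
        // raw_pitch is the row stride in bytes; clone() copies the data out of LibRaw's buffer.
        return cv::Mat(s.raw_height, s.raw_width, CV_16UC1,
                       rp.imgdata.rawdata.raw_image, s.raw_pitch).clone();
    }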
Thank you very much!
I successfully received pictures from raw_image, but they are small. As I understand it, I received the thumbnails contained in the raw file. How can I access the full-size picture?
I received such pictures:
https://postimg.cc/gr5k2Kft
Depending on the file's internal type (Bayer or full color), only one pointer in imgdata.rawdata is non-NULL:
raw_image - for bayer or X-Trans (color filter array) images
color3_image - for full-color (Foveon, linear DNG) images if a 3-channel data extractor was used
color4_image - for full-color images if a 4-channel data extractor was used
(If color4_image is non-NULL, this does not always mean you have 4-channel input; it could be 3-channel data with zeroes in the 4th channel. Inspect imgdata.idata.colors to check.)
There are three similar pointers for floating-point input: float_image, float3_image, float4_image.
So your code should check all six pointers (or only three, if floating-point input is not expected) and select the image type based on which pointer is non-NULL.
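A minimal sketch of that check (assuming LibRaw 0.20+ field names; the messages are placeholders):

    #include <libraw/libraw.h>
    #include <cstdio>

    void report_raw_layout(LibRaw &rp)   // call after rp.unpack()
    {
        const libraw_rawdata_t &rd = rp.imgdata.rawdata;
        if (rd.raw_image)
            printf("Bayer/X-Trans: one 16-bit component per pixel\n");
        else if (rd.color3_image)
            printf("Full color: 3 x 16-bit per pixel\n");
        else if (rd.color4_image)
            printf("Full color: 4 x 16-bit per pixel (colors=%d)\n", rp.imgdata.idata.colors);
        else if (rd.float_image || rd.float3_image || rd.float4_image)
            printf("Floating-point input\n");
        else
            printf("Nothing unpacked yet\n");
    }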
LibRaw (post)processing is very similar to dcraw.c postprocessing.
In order not to rewrite the same thing many times, this post-processing is explained in detail here: https://ninedegreesbelow.com/files/dcraw-c-code-annotated-code.html
Thanks for your help, Alex. We'll try to figure out how to apply such a variable contrast transformation ourselves then.
Thank you. I have checked, and the options are already set in the actual makefile, so it should work.
Adobe uses its own raw processing engine, different from LibRaw. There are no known rules to convert Adobe sliders into LibRaw's imgdata.params.
Also, I do not think that altering imgdata.image values without some knowledge of LibRaw::dcraw_process internals is a good idea. You may either study it (it is open source, so it is possible) or create your own raw processing.
LibRaw's dcraw_process() is very similar to dcraw.c processing, so you may find this link useful: https://ninedegreesbelow.com/files/dcraw-c-code-annotated-code.html
Thank you!!!
Sure, they have their own engine. ;)
But naive me thought I could just apply the exposure formula 2^exp to the imgdata.image values and be done...
Phew... that's a lot of text. I'll have to read it in a quiet moment.
But nevertheless, you helped me a lot.
Thank you very much, Alex!!!
Sorry, you already did. But it did not show up on the page. :)
imgdata.image[] is the processing array. After dcraw_process() it contains 16-bit linear data in components [0..2] and some garbage in [3].
To implement exposure correction, one may use the params settings as described in the library manual.
-- Alex Tutubalin
Yes, I know there are some processing tools already...
Exposure was just an example. And I have already found other programs using LibRaw to open raws for postprocessing, but I couldn't figure out which data they use for processing yet, because using the final RGB data gives only limited processing possibilities. That was the reason for extracting all these values to an xls.
Or, in other words, to make it perhaps clearer for me:
When I use the PS/LR sliders for exposure, shadows, highlights, sharpness, etc., what is the "corresponding LibRaw data" they "would" use for processing? Does that question make sense to you? Do these sliders use the final RGB values you helped me with, or do they use other data, like imgdata.image?
Regards.
>>> Can you help me with the first RGBG values? What are they for then?
Sorry, could not understand this question.
imgdata.image[] is the processing array. After dcraw_process() it contains 16-bit linear data in components [0..2] and some garbage in [3].
To implement exposure correction, one may use the params settings as described in the library manual.
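A minimal sketch of such a params-based exposure correction (field names as documented in the LibRaw manual; the file name and +1 EV value are just examples):

    LibRaw rp;
    rp.imgdata.params.exp_correc = 1;     // enable exposure correction
    rp.imgdata.params.exp_shift  = 2.0f;  // linear multiplier: 2.0 = +1 EV, 0.5 = -1 EV
    rp.imgdata.params.exp_preser = 0.8f;  // preserve highlights when lightening (0..1)
    rp.open_file("sample.cr2");           // example file name
    rp.unpack();
    rp.dcraw_process();                   // the correction is applied inside dcraw_process()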
Ahh, thanks.
So, I assume I use the wrong settings.... When I just use the camera WB, I should get the same as PS with camera WB, right?
Can you help me with the first RGBG values? What are they for then?
Regards.
Also, I've modified your sample by commenting out the params settings section, from
to (including) the custom gamma section.
After that, the 1st line of output is:
The same 54,64,14 as expected (if compiled with -std=c++17).
Thank you very much!!!!
So the .image->data[] is the final RGB array, with the selected WB.
But then I still don't get what imgdata.image[][] is for?
To which of these fields do I have to apply my exposure compensation "pixel *= 2^EV"?
Or, in other words, to which of these two fields do I have to apply my processing?
Regards.
Thanks.
I've added this to the mem_image sample:
just after these lines:
Output: PIX0: image: 1603/2064/330 mem_image: 54/64/14
The last triple is similar to your Photoshop numbers. So it looks like your printing code is wrong.
The yellow cast is because daylight WB is used (unless use_camera_wb is set).
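For reference, a minimal sketch of requesting the camera's "as shot" white balance in the mem_image flow (standard LibRaw parameters; error handling and the file name are placeholders):

    LibRaw rp;
    rp.imgdata.params.use_camera_wb = 1;  // use the WB recorded by the camera instead of daylight
    rp.open_file("sample.cr2");
    rp.unpack();
    rp.dcraw_process();
    int err = 0;
    libraw_processed_image_t *img = rp.dcraw_make_mem_image(&err);
    // ... use img->data (8-bit RGB by default) ...
    LibRaw::dcraw_clear_mem(img);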
Sure, here you are:
https://filebin.net/jw2qi5a46ybu5cny/Bild.cr2?t=expg9qj0
Regards.
RAW file, of course. It looks like I need to repeat your experiment with your input data.
Which file do you mean? The source or the resulting images?
And I'm using C++17 in VS2019 Community.
Could you please share the sample file?