Hey Alex, thanks for the reply! I'm aware that I will have to adjust for the mask and for the low contrast in the negative. At this point I just want to get the inversion step right so that I can be a little more certain that my overall approach will work. I updated my code as you suggested, but I'm still getting an entirely magenta image. Here's my loop:
This is the original image: http://ur.sine.com/temp/original.png
And this is the output of that code: http://ur.sine.com/temp/output.png
1st:
raw_image[] values are 'as decoded from the RAW file', so the black level is not subtracted.
If you're making an 'in-house' application, where the only camera you plan to use is the A7R-4, the exact black level for this camera is 512,
so inverted_value = maximum - original_value + 512 (the maximum also still has the black level within it).
If you plan to use multiple cameras, look into the src/preprocessing/subtract_black.cpp source for the black-subtraction code and into the src/utils/utils_libraw.cpp:adjust_bl() function for additional details.
2nd: If you're processing color negatives, inversion in the linear domain will not result in acceptable image color because of the color masking used in those negatives ( http://www.brianpritchard.com/why_colour_negative_is_orange.htm ).
Also, for both color and B&W negatives you'll need to adjust contrast.
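The inversion rule above can be sketched as a small helper. This is not LibRaw code: the function name is illustrative, 512 is the A7R-4 black level mentioned in the thread, and 16383 (14-bit) is an assumed white level for the example.

```cpp
#include <cassert>
#include <cstdint>

// Invert one raw sample of a negative while preserving the black level.
// raw_image[] values are "as decoded", i.e. the black level is NOT
// subtracted, and the white level ("maximum") also still contains it.
inline uint16_t invert_raw(uint16_t v, uint16_t maximum, uint16_t black)
{
    // inverted_value = maximum - original_value + black
    return static_cast<uint16_t>(maximum - v + black);
}
```

With this formula a pixel at the black level maps to the white level and vice versa, e.g. `invert_raw(512, 16383, 512) == 16383` and `invert_raw(16383, 16383, 512) == 512`, so the inverted data stays in the same valid raw range.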
Thanks. I was able to find the function in curves.cpp. It's more complicated than I thought.
I'm guessing that gamma[0] is pwr and gamma[1] is ts. I don't know what mode is, but 1 seems to work for me.
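For what it's worth, the shape of the curve gamma_curve() builds appears to be a linear toe of slope ts joined continuously to a power segment x^pwr. Here is a hedged floating-point sketch under that assumption (the real curves.cpp code fills integer lookup tables and its knee-solving may differ in detail); for the sRGB-like defaults use pwr = 1/2.4, ts = 12.92:

```cpp
#include <cassert>
#include <cmath>

// Sketch of a two-parameter gamma curve: a linear toe of slope `ts`
// joined to a power segment x^pwr, with x in [0, 1].
struct GammaCurve {
    double pwr, ts, x0, a; // knee position x0 and offset a are derived

    GammaCurve(double pwr_, double ts_) : pwr(pwr_), ts(ts_)
    {
        // At the knee both value and slope must match:
        //   ts*x0 = (1+a)*x0^pwr - a     and     ts = (1+a)*pwr*x0^(pwr-1)
        // Eliminate a via the slope equation and bisect on x0.
        double lo = 1e-9, hi = 1.0;
        for (int i = 0; i < 80; ++i) {
            double mid = 0.5 * (lo + hi);
            double am = ts * std::pow(mid, 1.0 - pwr) / pwr - 1.0;
            double mismatch = (1.0 + am) * std::pow(mid, pwr) - am - ts * mid;
            (mismatch > 0 ? lo : hi) = mid;
        }
        x0 = 0.5 * (lo + hi);
        a = ts * std::pow(x0, 1.0 - pwr) / pwr - 1.0;
    }

    double operator()(double x) const
    {
        return x < x0 ? ts * x : (1.0 + a) * std::pow(x, pwr) - a;
    }
};
```

With pwr = 1/2.4 and ts = 12.92 the solved knee comes out near the familiar sRGB constants (x0 ≈ 0.003, a ≈ 0.05), the endpoints map to 0 and 1, and small values stay on the linear segment (f(0.001) = 0.01292).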
Look into the gamma_curve() code.
It works using your suggestion. Thank you. Do you happen to know the exact mathematical formula f(x) for the gamma correction and how the two following parameters are used?
imgdata.params.gamm[0]
imgdata.params.gamm[1]
I reckon the first one is the power and the second one is the toe, but I have no clue how the second one is actually used. Also, I suppose the variable x should range between 0 and 1. I may have to reuse the gamma correction in a separate process later down the processing pipeline. Thank you very much.
This is for some special case asked for by our users (I can't remember the specific details); it is not targeted at 'normal' use, so it is not documented in detail.
The documentation could be better, it's true :)
P.S. Yes, scale_colors() does the range scaling and WB in a single step.
I guess what confused me was that scale_colors() also includes (or rather excludes, when disabled) white balance, which is usually important in exactly the advanced-interpolation use case you described... No biggie, because the advanced user could easily do the white balance before their custom interpolation; it was just unexpected/undocumented, so at least making it somewhat more explicit in the API docs would help avoid having to find out the hard way ;)
That's very useful. Thanks for the info.
image[][4] is used both for intermediate results and for the final result.
After dcraw_process() is called:
image[i][0..2] contains the final image for the i-th pixel, in 16-bit and in linear space;
image[i][3] contains garbage (unused intermediate results, etc.).
Also, image[] is not rotated (even if rotation is set via metadata or via user_flip).
make_mem_image prepares the final result: 3 components per pixel, gamma corrected (according to imgdata.params.gamm[]), 8 or 16 bit (according to params.output_bps).
Thanks again for the information. So, apart from the difference in size, in one case the gamma correction and some other processing happen when the TIFF is created (after dcraw_process()), while dcraw_make_mem_image() really produces the final processed pixel data.
imgdata.image is a 4-component (per pixel) array: https://www.libraw.org/docs/API-datastruct.html#libraw_data_t
dcraw_make_mem_image will create a 3-component (and gamma-corrected) array: https://www.libraw.org/docs/API-datastruct.html#libraw_processed_image_t
(I also suggest reading the docs and the samples' source code.)
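To make the memory relationship concrete, here is a self-contained sketch with no LibRaw calls (the name make_mem_image_sketch is illustrative): after dcraw_process(), imgdata.image is a width*height array of ushort[4] with channels 0..2 holding the linear result and channel 3 garbage, while dcraw_make_mem_image() allocates a new buffer and converts into 3 components per pixel, so the answer to the question above is that a real memory copy happens and the pointers differ.

```cpp
#include <array>
#include <cassert>
#include <cstdint>
#include <vector>

using Pixel4 = std::array<uint16_t, 4>; // layout of one imgdata.image pixel

// Mimics the copy dcraw_make_mem_image() performs: 4 components in,
// 3 components out, in a freshly allocated buffer.  (The real function
// also applies the gamma curve and the optional 16->8 bit conversion.)
std::vector<uint16_t> make_mem_image_sketch(const std::vector<Pixel4>& image)
{
    std::vector<uint16_t> out;
    out.reserve(image.size() * 3);
    for (const Pixel4& p : image) {
        out.push_back(p[0]); // R
        out.push_back(p[1]); // G
        out.push_back(p[2]); // B  -- p[3] (garbage) is dropped
    }
    return out;
}
```

The output buffer has 3 values per pixel rather than 4, and its data pointer is distinct from the input array's, which is exactly the distinction between imgdata.image and the dcraw_make_mem_image() result.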
Alright, thank you. So there's no easy way except using an external library then. Also, I wanted to ask: what exactly is the difference between the RawProcessor.imgdata.image pointer obtained after dcraw_process() and the result of dcraw_make_mem_image()? Is a memory copy done, or is it the same pointer?
LibRaw's TIFF writer is very simplistic.
Use LibRaw::dcraw_make_mem_image(): https://www.libraw.org/docs/API-CXX.html#dcraw_make_mem_image to create an in-memory RGB data array, then write it to your preferred file format (e.g. TIFF) using your preferred library (e.g. libtiff) with your preferred options.
LibRaw just delivers decoded raw data to the processing application. Processing (e.g. demosaicing, white balancing, channel mixing, etc.; demosaicing is not needed for Foveon data) is performed in the calling application (e.g. Affinity).
LibRaw contains some postprocessing code (derived from dcraw), but it is not intended for use in any professional-level application; it is mostly a 'proof of concept'. We do not have any plans to change LibRaw's postprocessing code.
Your demarcation is essentially correct.
Thanks again, Iliah.
In the interim, and for the sake of progress, I will assume that the LibRaw library permits opening an X3F file from the indicated cameras, and gives Affinity the option to request linear or log brightness levels for each RGB channel, including the ability not to apply any demosaicing, sharpening or noise removal.
That would explain how assembling a 'monochrome' image is entirely up to the raw developer.
In the only descriptive observation I have been able to obtain, Affinity indicated that
"...following one of your previous forum posts, and according to `LibRAW`, your images are colour images." With no other elaboration, they have confirmed that they know exactly what the problem is.
I will ask Affinity to reach out again.
Much appreciated.
I see. Thank you very much.
IIQ S is not (publicly) documented and has not been reverse engineered (at least, there are no open-source decoders).
We do not expect IIQ S support in the foreseeable future.
Dear Sir:
Without clear communication from Affinity we can't help, I'm afraid. We simply don't understand the issue they are having.
Again, my apologies Iliah.
I'm just a user stuck with a long-outstanding issue, trying to be as supportive as possible to reach a resolution, even if that's a kluge I need to incorporate. I will do whatever it takes (short of changing the camera system, or having to use Lightroom). I haven't gotten anywhere in a frustratingly long time. Anything constructive you can provide will be hugely appreciated.
Thanks!
There are two options for (no) auto-scaling:
1) params.no_auto_bright - disables the ETTR(-like) automated brightness correction; the entire image is scaled by 65535/(metadata_derived_maximum - black) instead of 65535/(real_data_max_by_histogram - black).
2) params.no_auto_scale - disables the entire scale_colors() call (for example, to get unmodified data in the image array).
The second case is for special uses (e.g. someone may want to do their own interpolation via a callback and wants to see unchanged data at that step).
In the (normal) processing case, scaling is significant: it produces the same scaled data from all the different sensor bit counts.
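The scaling described in (1) can be sketched as follows (a simplified model, ignoring white balance and per-channel handling; the function name is illustrative, not LibRaw API):

```cpp
#include <cassert>
#include <cstdint>

// Stretch values between the black and white levels to the full 16-bit
// range: 65535*(x - black)/(white - black).  This is why 12- and 14-bit
// sensors end up on the same scale after scale_colors().
inline uint16_t scale_to_16bit(uint16_t x, uint16_t black, uint16_t white)
{
    if (x <= black) return 0;
    uint32_t v = 65535u * static_cast<uint32_t>(x - black) / (white - black);
    return v > 65535u ? 65535u : static_cast<uint16_t>(v);
}
```

For example, a half-scale sample from a 12-bit sensor (white level 4607, black 512) and one from a 14-bit sensor (white level 16895, black 512) both land near 32768, while the black level maps to 0 and the white level to 65535.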
This whole thing doesn't look right. Raw converter developers are supposed to open an issue with us if there is a bug in LibRaw's raw data decoding. I don't see such an issue re Sigma, including one opened by the Affinity team. That means the ball is in their court.
Your request doesn't make it clear what needs to be fixed on our side. We deliver the raw data; the rest is up to the raw converter programmers.
deejjjaaaa: yes, you're right, that link was one of about 20+ ongoing posts since December 2019 on the Affinity Photo support forum.
Alex: I apologize, as I can only trust what the authoritative people each tell me, and Affinity (as clearly as they seem able to) say they have escalated this to LibRaw and that it is a LibRaw issue, having nothing to do with them. Pretty much the same as LibRaw feels it is not in their power to remedy this, but rather Affinity's.
Since I seem to have a "he said, he said" issue, would you be kind enough to provide any constructive steps to isolate whom I can work with to resolve this?
There's a good number of users for these 5 cameras with the same issues, and since this sensor's image processing works similarly to prior models, it might hopefully be something reproducible. I've lost nearly a year and a half unsuccessfully trying to obtain support on this one issue from Affinity, so anything that 'moves the needle forward' will be hugely appreciated!
All my best,
DLJ
Please also note that LibRaw will typically scale your raw values between the black and white levels during the linearization step (e.g. 65535*(x - black)/(white-black)), so you might get larger values even for gamma=1.0.
Unfortunately, I found that if you also set no_auto_scale to disable this, you kill LibRaw's white balance for some reason, which is typically not good for the demosaicing step; see https://github.com/letmaik/rawpy/issues/101
dcraw_ppm_tiff_writer() applies the gamma curve on write (imgdata.image[] is linear, while the output usually is not).
To set the gamma curve to linear, use:
imgdata.params.gamm[0] = imgdata.params.gamm[1] = 1.0;
Thanx.
I still do not see anything that should be *fixed* on our side (I hope Affinity's postprocessing is their own, not the demo code provided by our library).