Why is that uncalled for? Given that those multipliers are clearly used to set the default pre_mul, that seemed like a sensible suggestion to me. There was no criticism.
David
Dear Sir:
"Notice the last line above"
You can either use the numbers from the "Makernotes 'D65' WB multipliers: 665 302 406 302" line, which also appears above; or you may want to familiarize yourself with the procedure that calculates the "Derived D65 multipliers" and insert a CM2 into your copy of LibRaw.
"the code could attempt to default to calculating the D65 co-efficients instead of setting them to 1.0." is uncalled for.
OK I now understand what's happening:
>raw-identify -v "DSCF3954(X-Trans).RAF"
Filename: DSCF3954(X-Trans).RAF
Timestamp: Sat Oct 6 15:56:45 2018
Camera: Fujifilm X-T3 ID: 0x0
Body serial: 8CQ04794
Owner:
:
: stuff deleted
:
Filter pattern: GGGGBRGGGGRBGGGG
Makernotes 'As shot' WB multipliers: 563.000000 302.000000 531.000000 0.000000
Makernotes 'Tungsten' WB multipliers: 367 302 818 302
Makernotes 'Fine Weather' WB multipliers: 546 302 539 302
Makernotes 'Shade' WB multipliers: 595 302 462 302
Makernotes 'Daylight Fluorescent D' WB multipliers: 701 302 471 302
Makernotes 'Cool White Fluorescent W' WB multipliers: 571 302 736 302
Makernotes 'Warm White Fluorescent L' WB multipliers: 596 302 585 302
Makernotes 'Illuminant A' WB multipliers: 367 302 818 302
Makernotes 'D65' WB multipliers: 623 302 476 302
Makernotes 'Camera Auto' WB multipliers: 563 302 531 302
Camera2RGB matrix:
1.0000 0.0000 0.0000
0.0000 1.0000 0.0000
0.0000 0.0000 1.0000
XYZ->CamRGB matrix:
0.0000 0.0000 0.0000
0.0000 0.0000 0.0000
0.0000 0.0000 0.0000
Derived D65 multipliers: 1.000000 1.000000 1.000000
Notice the last line above ...
I know that 0.19.3 doesn't officially support the X-T3, but I think the code could attempt to default to calculating the D65 coefficients instead of setting them to 1.0.
I am happy that it is working as I expected with a raw file from an X-T1!
>raw-identify -v _DSF3925.RAF
Filename: _DSF3925.RAF
Timestamp: Wed Mar 15 20:56:39 2017
Camera: Fujifilm X-T1 ID: 0x0
: Stuff deleted.
Makernotes 'As shot' WB multipliers: 581.000000 302.000000 468.000000 0.000000
Makernotes 'Tungsten' WB multipliers: 389 302 686 302
Makernotes 'Fine Weather' WB multipliers: 581 302 468 302
Makernotes 'Shade' WB multipliers: 642 302 401 302
Makernotes 'Daylight Fluorescent D' WB multipliers: 735 302 410 302
Makernotes 'Cool White Fluorescent W' WB multipliers: 598 302 619 302
Makernotes 'Warm White Fluorescent L' WB multipliers: 622 302 485 302
Makernotes 'Illuminant A' WB multipliers: 389 302 686 302
Makernotes 'D65' WB multipliers: 665 302 406 302
Makernotes 'Camera Auto' WB multipliers: 590 302 468 302
Camera2RGB matrix:
1.6442 -0.5539 -0.0904
-0.1896 1.6455 -0.4559
0.0505 -0.5409 1.4904
XYZ->CamRGB matrix:
0.8458 -0.2451 -0.0855
-0.4597 1.2447 0.2407
-0.1475 0.2482 0.6526
Derived D65 multipliers: 2.147247 0.934710 1.221633
Sorry if this has been a bother.
David
Short answer:
- you need to subtract the black level values from the (unaltered) RAW values,
- then multiply by the normalized per-channel WB coefficients.
Long(er) answer: use the code of the subtract_black(), scale_colors() and pre_interpolate() functions as a reference; these functions are called before the interpolation (demosaic) step to perform the data adjustments needed before debayering.
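To make that concrete, here is a minimal sketch of those two steps applied to a buffer of active-area CFA values (not LibRaw's actual implementation: it assumes a bayer sensor, uses the as-shot multipliers in imgdata.color.cam_mul[], the helper name is made up, and it skips the per-pattern black levels and the 16-bit rescaling that scale_colors() also performs):

#include <libraw/libraw.h>

// Illustrative helper (name is arbitrary): `buffer` holds width x height
// active-area CFA values, as copied out of imgdata.rawdata.raw_image.
static void apply_camera_wb(LibRaw &lr, unsigned short *buffer)
{
    const libraw_colordata_t &C = lr.imgdata.color;
    const int width = lr.imgdata.sizes.width;
    const int height = lr.imgdata.sizes.height;

    // Normalize the as-shot multipliers so the smallest one becomes 1.0
    // (scale_colors() does the equivalent by default).
    float mul[4];
    for (int c = 0; c < 4; c++)
        mul[c] = C.cam_mul[c] > 0.f ? C.cam_mul[c] : C.cam_mul[1]; // 4th value may be 0
    float mn = mul[0];
    for (int c = 1; c < 4; c++)
        if (mul[c] < mn) mn = mul[c];
    for (int c = 0; c < 4; c++)
        mul[c] /= mn;

    for (int row = 0; row < height; row++)
        for (int col = 0; col < width; col++)
        {
            int ch = lr.COLOR(row, col);                          // CFA channel at this position
            long i = (long)row * width + col;
            float v = (float)buffer[i] - C.black - C.cblack[ch];  // subtract black level
            if (v < 0.f) v = 0.f;
            v *= mul[ch];                                         // apply normalized WB coefficient
            if (v > 65535.f) v = 65535.f;
            buffer[i] = (unsigned short)v;
        }
}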
They not only look identical, they are identical:
D:\Users\amonra\Documents\Astrophotography\DSS Test Images>dcraw_emu -v -v -r 1.0 1.0 1.0 1.0 "DSCF3954(X-Trans).RAF"
Processing file DSCF3954(X-Trans).RAF
Reading metadata finished
Starting Reading RAW data (expecting 2 iterations)
Reading RAW data finished
Starting Scaling colors (expecting 2 iterations)
Scaling colors finished
Starting Pre-interpolating (expecting 2 iterations)
Pre-interpolating finished
Starting Converting to RGB (expecting 2 iterations)
Converting to RGB finished
Writing file DSCF3954(X-Trans).RAF.ppm
D:\Users\amonra\Documents\Astrophotography\DSS Test Images>rename "DSCF3954(X-Trans).RAF.ppm" "DSCF3954(X-Trans).RAF.NoWB.ppm"
D:\Users\amonra\Documents\Astrophotography\DSS Test Images>dcraw_emu -v -v "DSCF3954(X-Trans).RAF"
Processing file DSCF3954(X-Trans).RAF
Reading metadata finished
Starting Reading RAW data (expecting 2 iterations)
Reading RAW data finished
Starting Scaling colors (expecting 2 iterations)
Scaling colors finished
Starting Pre-interpolating (expecting 2 iterations)
Pre-interpolating finished
Starting Converting to RGB (expecting 2 iterations)
Converting to RGB finished
Writing file DSCF3954(X-Trans).RAF.ppm
D:\Users\amonra\Documents\Astrophotography\DSS Test Images>fc "DSCF3954(X-Trans).RAF.NoWB.ppm" "DSCF3954(X-Trans).RAF.ppm"
Comparing files DSCF3954(X-Trans).RAF.NoWB.ppm and DSCF3954(X-TRANS).RAF.PPM
FC: no differences encountered
D:\Users\amonra\Documents\Astrophotography\DSS Test Images>
Thanks Alex,
I can now decode images from Canon DSLRs that only show the active area. My test code now looks like this:
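m_raw_proc->open_buffer(data, size);
m_raw_proc->unpack();
int pos = 0;
int top_margin = m_raw_proc->imgdata.sizes.top_margin;
int left_margin = m_raw_proc->imgdata.sizes.left_margin;
int raw_pitch = m_raw_proc->imgdata.sizes.raw_pitch / 2; // pitch in pixels (raw_pitch is in bytes)
for (int r = 0; r < m_raw_proc->imgdata.sizes.height; r++)
{
    for (int c = 0; c < m_raw_proc->imgdata.sizes.width; c++)
    {
        buffer[pos] = m_raw_proc->imgdata.rawdata.raw_image[(r + top_margin) * raw_pitch + left_margin + c];
        pos++;
    }
}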
After this code has run, I have undebayered data in buffer[]. This works very well, but I have no idea how to apply the in-camera white balance. I would be grateful if you could explain how to apply a white balance or direct me to a resource that explains this. Indeed, is this even possible without using the dcraw functions?
Once again, I thank you for your continued help.
Amanda
Theoretical part:
1) Many (not all) digital cameras have 'masked' (opaque) pixel areas (a black frame), so the image area is smaller than the full sensor area. These black pixels may be used for black-level calibration, banding suppression, etc. (the usable area is specific to the camera model, so we won't discuss it here).
2) The imgdata.rawdata.raw_image[] array contains the full sensor area decoded from the RAW file. It needs to be cropped during processing to exclude the black frame.
Several variables in imgdata.sizes describe the sensor area:
- raw_width, raw_height - full sensor size.
- raw_pitch - row pitch (in bytes! so divide it by 2 for raw_image, by 6 for color3_image) in the rawdata.* pointers. Usually raw_pitch is just raw_width * 2, but this is not always so (e.g. if the file is decoded via the DNG SDK).
- top_margin, left_margin - pixel coordinates of the top-left corner of the visible image area.
- width, height - size of the visible area (there is a special case for the Fuji Super-CCD sensors used in very old cameras; let's leave that aside).
So, there are two ways to go:
A. Continue to use the imgdata.rawdata.raw_image array without copying it into imgdata.image.
You'll need to change your all-pixel loops to something like this (I'll skip some imgdata.sizes prefixes to shorten the statements):
// srow - source row in raw_image, drow - destination row in buffer; same for columns
for (int srow = top_margin, drow = 0; srow < top_margin + height; srow++, drow++)
    for (int scol = left_margin, dcol = 0; scol < left_margin + width; scol++, dcol++)
        buffer[drow * width + dcol] = imgdata.rawdata.raw_image[srow * (raw_pitch / 2) + scol];
B. Use the LibRaw::raw2image() call.
This call allocates the imgdata.image[..][4] array with 4 components per pixel.
After this call, 3 of the 4 components are zero; only image[row*width+col][COLOR(row,col)] is non-zero.
If you perform the debayering in your own code, raw2image() may not be the optimal choice because of the extra memory use. You may consider debayering in place (directly in the imgdata.image[][4] array) to save memory.
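For example, a sketch like this (error checks omitted, function name made up) collapses the 4-component array back into a single-plane CFA buffer:

#include <libraw/libraw.h>
#include <vector>

// Sketch: build a one-plane CFA buffer from imgdata.image after raw2image().
// lr must already have the file opened via open_file()/open_buffer().
static std::vector<unsigned short> cfa_from_image(LibRaw &lr)
{
    lr.unpack();
    lr.raw2image();                                  // allocates imgdata.image[iwidth*iheight][4]
    const int w = lr.imgdata.sizes.iwidth;
    const int h = lr.imgdata.sizes.iheight;
    std::vector<unsigned short> cfa((size_t)w * h);
    for (int row = 0; row < h; row++)
        for (int col = 0; col < w; col++)
            cfa[(size_t)row * w + col] =
                lr.imgdata.image[row * w + col][lr.COLOR(row, col)]; // the only non-zero component
    return cfa;
}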
Feel free to ask if you need additional explanations
Further to my original post.
My code works for Nikon DSLR cameras via imgdata.rawdata (albeit the images seem to be under-exposed), but images from Canon DSLRs show bad data near the top of each image. I've been reading various forum posts and it seems that I need to call raw2image() in order to get data that contains only the visible (active) pixels, but I am struggling to get undebayered data.
It is my understanding that after calling raw2image() and without calling dcraw_process() I should have an undebayered dataset in imgdata.image[][] which only contains visible pixels, is that correct?
What I need to do is to copy data from imgdata.image[][] to a one dimensional 16-bit integer array (size is width * height) that gets saved as a FITS or a TIFF file which can then be debayered later by my post-processing application.
Here is a code example of what I am currently doing:
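if ((ret = m_raw_proc->open_buffer(data, size)) != 0)
{
    // Handle error
}
if ((ret = m_raw_proc->unpack()) != 0)
{
    // Handle error
}
if ((ret = m_raw_proc->raw2image()) != 0)
{
    // Handle error
}
m_width = m_raw_proc->imgdata.sizes.iwidth;
m_height = m_raw_proc->imgdata.sizes.iheight;
for (int n = 0; n < m_width * m_height; n++)
{
    buffer[n] = m_raw_proc->imgdata.image[n][0]; // always takes component 0
}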
However, when I try to debayer the image in an external application, the result of the code above is heavily biased toward a specific primary colour. How do I correctly access the undebayered data from imgdata.image[][]?
Thanks
Amanda
Thanks Alex,
I now have my application working perfectly.
Amanda
Followup:
1) The colors are listed in the imgdata.idata.cdesc string ("RGBG" in most cases, "CMYG" for some very old cameras, etc.).
2) For RGBG (modern bayer), the values returned by COLOR() are:
0 - Red
1 - Green
2 - Blue
3 - Green (for some cameras the two greens differ in black level, amplification, or even color response)
COLOR(row, col) (https://www.libraw.org/docs/API-CXX.html#COLOR) returns the pixel color index for (row, col).
Of course, these colors repeat across rows/columns (in a 2x2 pattern for normal bayer and a 6x6 pattern for X-Trans), so you do not need to call it for every pixel.
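So for a normal bayer sensor you could derive the familiar pattern string yourself, roughly like this (a sketch only; the function name is made up, and for X-Trans you would scan a 6x6 block instead):

#include <libraw/libraw.h>
#include <string>

// Sketch: build the 2x2 CFA pattern string (e.g. "RGGB") from COLOR() and cdesc.
// lr must already have the file opened via open_file()/open_buffer().
static std::string cfa_pattern(LibRaw &lr)
{
    std::string p;
    for (int row = 0; row < 2; row++)
        for (int col = 0; col < 2; col++)
            p += lr.imgdata.idata.cdesc[lr.COLOR(row, col)]; // both greens map to 'G'
    return p;
}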
Thank you Alex, this is exactly what I wanted. I can now get a raw undebayered buffer from LibRaw and display it on my image preview screen as a greyscale image. Of course, I see the bayer pattern on the preview display. Ultimately, I would like to be able to debayer the buffer using a nearest-neighbour algorithm purely for the live preview screen and save undebayered TIFF files.
This leads me on to my final question. Is it possible for LibRaw to return the camera bayer pattern, for example RGGB, BGGR, GBRG or GRBG, or is this not possible?
Once again, thank you.
Amanda
Thank you very much!
dcraw_emu sample supports both TIFF output and cropping.
Original RAW data (after LibRaw::unpack()) is available via the imgdata.rawdata.* arrays: raw_image for bayer, X-Trans or monochrome data, color3_image for 3-color non-bayer files, color4_image for 4-color ones. Only one of these pointers is non-NULL after the unpack() call.
Note: these arrays contain unpacked RAW pixels without any adjustment/cropping:
- masked (black frame) pixels are in place; the pixel array size is imgdata.sizes.raw_width x raw_height
- the black level is not subtracted
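As a rough illustration of that layout (the helper is hypothetical), reading one value at full-sensor, uncropped coordinates would look something like this:

#include <libraw/libraw.h>

// Sketch: fetch one value at full-sensor coordinates right after unpack().
// raw_pitch is in bytes, hence the divisions below.
static unsigned short raw_value(LibRaw &lr, int row, int col)
{
    const libraw_rawdata_t &R = lr.imgdata.rawdata;
    const int pitch_bytes = lr.imgdata.sizes.raw_pitch;
    if (R.raw_image)                    // bayer / X-Trans / monochrome: 1 value per pixel
        return R.raw_image[row * (pitch_bytes / 2) + col];
    if (R.color3_image)                 // 3 components per pixel; return the green one here
        return R.color3_image[row * (pitch_bytes / 6) + col][1];
    if (R.color4_image)                 // 4 components per pixel
        return R.color4_image[row * (pitch_bytes / 8) + col][1];
    return 0;
}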
Please wait until the public snapshot.
Is the code with CR3 support available in some private branch? I would like to test it if it is available.
Could you please provide image sample?
Answered in another sub-thread: https://www.libraw.org/comment/5396#comment-5396
Dear Sir:
could you please share an image sample (e.g. upload it to Dropbox/WeTransfer/Mega.NZ/etc. and send the link to info@libraw.org)?
Also, as-shot/automatic white balance (the -w or -a switch for dcraw_emu) will most likely work.
Hi!
I found that the 100 MP sensor is included in the LibRaw supported camera list, but for different Phase One camera models. I was working with a Phase One iXG 100 MP and thought I'd try to see what the output looks like after default demosaicing. The output TIFF is too greenish (which is not correct) ( dcraw_emu -v -dcbe -T input.IIQ ).
Any good suggestions for this unsupported-camera issue? I have attached the output.
In the next public snapshot (this Fall).
Hi,
any idea when you plan to support these cameras?
Thank you.
Best Regards,
Well, one more question...
Beyond the core library, you also provide a few command line tools with your installation package.
Could you add support for reporting header details per included raw image when a file contains more than one frame? I hope this would let me discover whether there are differences in the attributes of the two contained images, so that "High Resolution" and "Dynamic Range" dual-capture raw files can be told apart by comparing their attributes.
I believe hardly any software is aware that raw image files can contain more than one image frame.
Thanks, problem solved.