You already have a white balance matrix for the 60Da in LibRaw; the problem is that it isn't selected, because the camera model is reported in the raw file as EOS 60D, *not* EOS 60Da.
I found the problem! It wasn't my code at all. It relates to the choice of the correct White Balance to use for a Canon EOS 60Da (the same is likely true of the Canon EOS Ra and the Nikon D810A).
If I process the data using “Daylight White Balance”, the Daylight White Balance values used are those that are appropriate for the EOS 60D (not the 60Da) because the camera model reported in the Raw files is set to EOS 60D (oops – that’s a firmware bug).
In this case all images acquire a very distinct red cast which is really *not* what you want at all!
If you set the camera to use Daylight White Balance and process that data using the Camera White Balance values, then the correct white balance for an EOS 60Da is applied.
I totally understand your position on this. I'll ask on the General section if there's anyone who would be prepared to assist.
As to what I am trying to achieve - it is simply to decode the raw file into my internal format such that, when it is displayed, I get the same visual results that other software produces - right now the images are too red.
Unless there's a compact, self-contained example that can be compiled independently of your environment, your macros, and so on, you shouldn't expect a meaningful answer.
Especially since there's no question. "What am I doing wrong" is meaningless without the context of "what do I want to achieve?" (That's unclear after a quick review of your code, and expecting someone to spend hours on it is also pretty unreasonable.)
Alex,
If you can't see my code, then you can't advise me on what I am doing wrong. I left the comments there deliberately, as IMHO they should assist in understanding what I am doing.
I imagine that this has to do with my code that processes the white balance but I don't know for sure. If I knew exactly where in my code the problem was I would of course post just that small section, but as I don't know where I am going wrong, I felt it much better to provide more information rather than less.
If you think this isn't a question for LibRaw technical support, then should I post this in the General section? If not there then where else should I go for assistance?
A long piece of code, 2/3 of which is some kind of debugging or verification stuff or comments, plus the question "what's wrong here", is clearly not a question for LibRaw support.
We do not know. There are too many extra details not related to the question itself.
Try asking your question in a form 20 times more compact; then there is at least a chance that a person (and not a compiler or debugger) will understand at first glance what is written (and what is wrong).
code display got messed up at the end. Here's what I hope is not messed up.
// No Auto White Balance
O.use_auto_wb = false;
//
// Do we disable all White Balance processing?
//
const auto setWbMult = [&](const float factor)
{
    O.user_mul[0] = factor;
    O.user_mul[1] = factor;
    O.user_mul[2] = factor;
    O.user_mul[3] = factor;
};
if (workspace.value("RawDDP/NoWB", false).toBool())
    setWbMult(1.0f); // Yes, so set the user white balance multipliers to 1.0
else
    setWbMult(0.0f); // No, so set the user white balance multipliers to 0.0
O.use_camera_wb = workspace.value("RawDDP/CameraWB", false).toBool() ? 1 : 0;

// Don't stretch or rotate raw pixels (equivalent to dcraw -j)
O.use_fuji_rotate = 0;
// Don't flip the image (equivalent to dcraw -t 0)
O.user_flip = 0;
// Output color space: raw -> sRGB (default)
/*
argv[argc] = _T("-o");
argc++;
argv[argc] = _T("0");
argc++;
*/
O.user_black = workspace.value("RawDDP/BlackPointTo0", false).toBool() ? 0 : -1;
// Output is 16 bits (equivalent of dcraw flag -4)
O.gamm[0] = O.gamm[1] = O.no_auto_bright = 1;
O.output_bps = 16;

g_Progress = pProgress;
ZTRACE_RUNTIME("Calling LibRaw::unpack()");
if ((ret = rawProcessor.unpack()) != LIBRAW_SUCCESS)
{
    bResult = false;
    ZTRACE_RUNTIME("Cannot unpack %s: %s", file.generic_u8string().c_str(), libraw_strerror(ret));
}
if (!bResult)
    break;

//
// Create the class that populates the bitmap
//
CopyableSmartPtr<BitmapFillerInterface> pFiller = BitmapFillerInterface::makeBitmapFiller(pBitmap, pProgress);
// Get the Colour Filter Array type and set into the bitmap filler
m_CFAType = GetCurrentCFAType();
pFiller->SetCFAType(m_CFAType);

#define RAW(row,col) raw_image[(row) * S.width + (col)]

ZTRACE_DEVELOP("Extracting real image data (excluding the frame) from rawProcessor.imgdata.rawdata.raw_image");
//
// This is a regular RAW file so no Fuji Super-CCD stuff
//
// Just copy the "real image" portion of the data excluding
// the frame
//
#pragma omp parallel for default(shared) if(numberOfProcessors > 1)
for (int row = 0; row < S.height; row++)
{
    for (int col = 0; col < S.width; col++)
    {
        RAW(row, col) = RawData.raw_image[(row + S.top_margin) * S.raw_pitch / 2 + (col + S.left_margin)];
    }
}

//
// Now process the data that raw_image points to, which is either
//
// 1) The output of post-processing the Fuji Super-CCD raw,
//    stored in the USHORT array hung off raw_image, or
//
// 2) Normal common or garden raw Bayer matrix data that's been
//    copied from RawData.raw_image to raw_image (less the frame)
//
// Either way we should now be processing a regular greyscale 16-bit
// pixel array which has an associated Bayer Matrix or is true monochrome
//
pFiller->setGrey(true);
pFiller->setWidth(S.width);
pFiller->setHeight(S.height);
pFiller->setMaxColors((1 << 16) - 1);

// Report User Black Point override
if (0 == O.user_black)
    ZTRACE_RUNTIME("User set Black Point to 0");

//
// Before doing dark subtraction, normalise C.black / C.cblack[]
//
ZTRACE_DEVELOP("Before adjust_bl() C.black = %d.", C.black);
ZTRACE_DEVELOP("First 10 C.cblack elements\n %d, %d, %d, %d\n %d, %d\n %d, %d, %d, %d",
    C.cblack[0], C.cblack[1], C.cblack[2], C.cblack[3],
    C.cblack[4], C.cblack[5],
    C.cblack[6], C.cblack[7], C.cblack[8], C.cblack[9]);
rawProcessor.adjust_bl();

//
// This code is based on code from LibRaw Version 19.2, specifically method:
//
//   int LibRaw::subtract_black_internal()
//
// found at line 4532 in source file libraw_cxx.cpp
//
// Do dark subtraction on the image. If a user defined black level has
// been set (it will be zero) then use that, otherwise just use the black
// level for the camera.
//
// Note that this is only done on real image data, not the frame.
//
// While doing so collect the largest value in the image data.
//
ZTRACE_DEVELOP("Subtracting black level of C.black = %d from raw_image data.", C.black);
ZTRACE_DEVELOP("First 10 C.cblack elements\n %d, %d, %d, %d\n %d, %d\n %d, %d, %d, %d",
    C.cblack[0], C.cblack[1], C.cblack[2], C.cblack[3],
    C.cblack[4], C.cblack[5],
    C.cblack[6], C.cblack[7], C.cblack[8], C.cblack[9]);
const int size = static_cast<int>(S.height) * static_cast<int>(S.width);
if (!rawProcessor.is_phaseone_compressed() &&
    (C.cblack[0] || C.cblack[1] || C.cblack[2] || C.cblack[3] || (C.cblack[4] && C.cblack[5])))
{
    const int cblk[4] = { static_cast<int>(C.cblack[0]), static_cast<int>(C.cblack[1]),
                          static_cast<int>(C.cblack[2]), static_cast<int>(C.cblack[3]) };
    int dmax = 0; // Maximum value of pixels in entire image.
    int lmax = 0; // Local (or Loop) maximum value found in the 'for' loops below. For OMP.
    if (C.cblack[4] && C.cblack[5])
    {
#pragma omp parallel default(shared) firstprivate(lmax) if(numberOfProcessors > 1)
        {
#pragma omp for
            for (int i = 0; i < size; i++)
            {
                int val = raw_image[i];
                val -= C.cblack[6 + i / S.width % C.cblack[4] * C.cblack[5] + i % S.width % C.cblack[5]];
                val -= cblk[i & 3];
                raw_image[i] = static_cast<std::uint16_t>(std::clamp(val, 0, 65535));
                lmax = std::max(val, lmax);
            }
#pragma omp critical
            dmax = std::max(lmax, dmax); // For non-OMP case this is equal to dmax = lmax.
        }
    }
    else
    {
#pragma omp parallel default(shared) firstprivate(lmax) if(numberOfProcessors > 1)
        {
#pragma omp for
            for (int i = 0; i < size; i++)
            {
                int val = raw_image[i];
                val -= cblk[i & 3];
                raw_image[i] = static_cast<std::uint16_t>(std::clamp(val, 0, 65535));
                lmax = std::max(val, lmax);
            }
#pragma omp critical
            dmax = std::max(lmax, dmax); // For non-OMP case this is equal to dmax = lmax.
        }
    }
    C.data_maximum = dmax & 0xffff;
    C.maximum -= C.black;
    memset(&C.cblack, 0, sizeof(C.cblack)); // Yeah, we used cblack[6+] values too!
    C.black = 0;
}
else
{
    // Nothing to do: maximum is already calculated and the black level is 0,
    // so no change; only calculate the channel maximum.
    unsigned int dmax = 0; // Maximum value of pixels in entire image.
    unsigned int lmax = 0; // For OpenMP.
#pragma omp parallel default(shared) firstprivate(lmax) if(numberOfProcessors > 1)
    {
#pragma omp for
        for (int i = 0; i < size; i++)
            lmax = std::max(static_cast<unsigned int>(raw_image[i]), lmax);
#pragma omp critical
        dmax = std::max(lmax, dmax); // For non-OMP case this is equal to dmax = lmax.
    }
    C.data_maximum = dmax;
}

//
// The image data needs to be scaled using the white balance coefficients.
// Currently we do not handle "Auto White Balance".
//
float pre_mul[4] = { 0.0, 0.0, 0.0, 0.0 };
if (1 == O.user_mul[0])
{
    static_assert(sizeof(O.user_mul) >= sizeof(pre_mul));
    ZTRACE_RUNTIME("No White Balance processing.");
    memcpy(pre_mul, O.user_mul, sizeof(pre_mul));
}
else if (1 == O.use_camera_wb && -1 != C.cam_mul[0])
{
    static_assert(sizeof(C.cam_mul) >= sizeof(pre_mul));
    ZTRACE_RUNTIME("Using Camera White Balance (as shot).");
    memcpy(pre_mul, C.cam_mul, sizeof(pre_mul));
}
else
{
    static_assert(sizeof(C.pre_mul) >= sizeof(pre_mul));
    ZTRACE_RUNTIME("Using Daylight White Balance.");
    memcpy(pre_mul, C.pre_mul, sizeof(pre_mul));
}
ZTRACE_RUNTIME("White balance coefficients being used are %f, %f, %f, %f",
    pre_mul[0], pre_mul[1], pre_mul[2], pre_mul[3]);
if (0 == pre_mul[3])
    pre_mul[3] = P1.colors < 4 ? pre_mul[1] : 1;

//
// Now apply a linear stretch to the raw data, scaling to the "saturation"
// level, not to the value of the pixel with the greatest value (which may be
// higher than the saturation level).
//
// const double dmax = *std::max_element(&pre_mul[0], &pre_mul[4]);
const float dmin = *std::ranges::min_element(pre_mul);
const float saturationScaling = 65535.0f / static_cast<float>(C.maximum);
std::array<float, 8> scale_mul = { 0.0f, 0.0f, 0.0f, 0.0f,
    saturationScaling, saturationScaling, saturationScaling, saturationScaling };
for (int q = 0; q < 4; q++)
    scale_mul[q] = (pre_mul[q] /= dmin) * saturationScaling;

ZTRACE_DEVELOP("Maximum value pixel has value %d", C.data_maximum);
ZTRACE_DEVELOP("Saturation level is %d", C.maximum);
ZTRACE_DEVELOP("Applying linear stretch to raw data. Scale values %f, %f, %f, %f, %f, %f, %f, %f",
    scale_mul[0], scale_mul[1], scale_mul[2], scale_mul[3],
    scale_mul[4], scale_mul[5], scale_mul[6], scale_mul[7]);

#pragma omp parallel default(shared) if(numberOfProcessors > 1) // No OPENMP: 240ms, with OPENMP: 92ms
{
#pragma omp master // There is no implied barrier.
    ZTRACE_RUNTIME("RAW file processing with %d OpenMP threads, little_endian is %s",
        omp_get_num_threads(), littleEndian ? "true" : "false");
#pragma omp for
    for (int row = 0; row < S.height; row++)
    {
        for (int col = 0; col < S.width; col++)
        {
            // What colour will this pixel become
            const int colour = rawProcessor.COLOR(row, col);
            const float val = scale_mul[colour] * static_cast<float>(RAW(row, col));
            RAW(row, col) = static_cast<std::uint16_t>(std::clamp(static_cast<int>(val), 0, 65535));
        }
    }
}
The docs are maybe a little confusing since they seem to say that iheight includes stretching:
iheight/iwidth: Size of the output image (may differ from height/width for cameras that require image rotation or have non-square pixels).
Yes, there's potential for confusion here, since the data usage paths may differ.
Some might use LibRaw::raw2image() and then process the pixels (already composed into 4-component pixels) themselves, accessing imgdata.image.
In this case, iwidth and iheight are correct (describing imgdata.image[][] dimensions), even when using half-interpolation.
And some might use LibRaw::dcraw_process() and then NOT use dcraw_make_mem_image() to generate the 3-component output array, but directly access the 4-component one.
I agree, iwidth/iheight should be adjusted internally to match the new imgdata.image dimensions if LibRaw::stretch() is used internally.
Most likely, all old library users have long since solved this problem (depending on how they use the data), but probably every new user has a chance of stumbling in this place.
I don't know what your code does, so I don't know what you should do.
If you're doing your own post-processing, you're probably using decoded RAW data only. In that case, it's worth considering pixel_aspect on final rendering.
If you're using LibRaw post-processing (LibRaw::dcraw_process) then LibRaw::dcraw_make_mem_image() will do all the needed tricks.
In fact, the only difference is that the errcnt variable is declared OpenMP-shared (it is changed atomically if an error is caught), so the difference should be negligible.
I can only recommend performing detailed profiling of both versions and comparing where exactly you're experiencing performance degradation at the individual operator level.
Since I don't see any performance differences on our end, there's nothing to look for there.
We've explained the width/height and iwidth/iheight values at the different file-processing stages: https://www.libraw.org/docs/API-datastruct-eng.html#size_explanation
There is indeed a lot of archeology here, and some things need to be corrected (see the TODO section in the added text).
You're welcome, D.
Thank you very much.
Here's a sample file:
https://www.dropbox.com/scl/fi/v2nvbxjuspr9lf87fbyjh/L_M81_0244_ISO800_6...
David
Please let us see a file, and we will see what we can do about the issue.
Great to hear that the problem source has been identified.
Could you please share an example file (or files) from this camera with us, so we can add the correct color matrix to LibRaw?
Please use any file-sharing service (e.g. WeTransfer's free option) and send the link to info@libraw.org
David
'Other software' usually does some post-processing (black subtraction, white balance, demosaicing, tone curve...)
Please pay attention to our update policy: https://www.libraw.org/#updatepolicy
No support is listed for this model yet, but RawDigger lists support as of February, and the release page says:
if our RawDigger announced at our blog, https://www.rawdigger.com/news, supports the camera / format, that's because LibRaw supports it.
Is it supported but omitted, or is support coming in the next release?
So I should set my image height to `iheight / pixel_aspect`? Thanks!
I tried in libraw 0.22.1 and see the same behaviour.
Here's the issue where the problem was reported:
https://github.com/libvips/libvips/issues/5000
LibRaw applies pixel_aspect at the post-processing stage (LibRaw::stretch() is called in LibRaw::dcraw_process()).
The stretch() call adjusts the height or width value.
Great to know that the problem has been resolved.
For the files you provided, with OpenMP enabled, the difference is negligible:
Also, the diff in src/decoders/fuji_compressed.cpp is very small (it changes error handling in the OpenMP case):
diff --git a/src/decoders/fuji_compressed.cpp b/src/decoders/fuji_compressed.cpp
index acea0825..40d92d78 100644
--- a/src/decoders/fuji_compressed.cpp
+++ b/src/decoders/fuji_compressed.cpp
@@ -229,9 +229,9 @@ static inline void fuji_fill_buffer(fuji_compressed_block *info)
 {
   if (info->cur_pos >= info->cur_buf_size)
   {
+    bool needthrow = false;
     info->cur_pos = 0;
     info->cur_buf_offset += info->cur_buf_size;
-    bool needthrow = false;
 #ifdef LIBRAW_USE_OPENMP
 #pragma omp critical
 #endif
@@ -1155,14 +1155,16 @@ void LibRaw::fuji_decode_loop(fuji_compressed_params *common_info, int count, IN
   const int lineStep = (libraw_internal_data.unpacker_data.fuji_total_lines + 0xF) & ~0xF;
 #ifdef LIBRAW_USE_OPENMP
   unsigned errcnt = 0;
-#pragma omp parallel for private(cur_block)
+#pragma omp parallel for private(cur_block) shared(errcnt)
 #endif
   for (cur_block = 0; cur_block < count; cur_block++)
   {
-    try{
+    try
+    {
       fuji_decode_strip(common_info, cur_block, raw_block_offsets[cur_block],
                         block_sizes[cur_block], q_bases ? q_bases + cur_block * lineStep : 0);
-    } catch (...)
+    }
+    catch (...)
     {
 #ifdef LIBRAW_USE_OPENMP
 #pragma omp atomic
Followup:
compiled with clang 19.1.7
Compilation flags: -O3 -fopenmp