Also, I took a look at the code of libtiff, which is also included in MSYS2:
#ifdef __WIN32__
#include <windows.h>
/*
 * Open a TIFF file with a Unicode filename, for read/writing.
 */
TIFF*
TIFFOpenW(const wchar_t* name, const char* mode)
{
        static const char module[] = "TIFFOpenW";
        int m, fd;
        int mbsize;
        char *mbname;
        TIFF* tif;

        m = _TIFFgetMode(mode, module);
        if (m == -1)
                return ((TIFF*)0);
That line of code is there because some (old) versions of MinGW do not support wide chars in the file-opening interface.
If that has changed, the line should be changed too (with a compiler/runtime/whatever version check).
We're open to contributions, so if you could investigate the problem in depth (which versions/runtimes work with wchar_t and which do not), just propose a patch.
Alternatively, you may create your own LibRaw datastream implementation and pass it to LibRaw.
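For anyone who wants a lighter-weight workaround in the same spirit, here is a rough sketch (not an official recommendation; minimal error handling, and the path is only an example): open the file yourself with the wide-char CRT call and hand the bytes to LibRaw::open_buffer, bypassing LibRaw's own file-open path.

#include <libraw/libraw.h>
#include <cstdio>
#include <vector>

int main()
{
    const wchar_t *name = L"C:\\фото\\DSC_1796.NEF";   // example non-ASCII path

    // Read the whole file via _wfopen (Windows-only CRT call).
    FILE *f = _wfopen(name, L"rb");
    if (!f)
        return 1;
    fseek(f, 0, SEEK_END);
    long sz = ftell(f);
    fseek(f, 0, SEEK_SET);
    std::vector<unsigned char> buf(sz > 0 ? (size_t)sz : 0);
    size_t rd = fread(buf.data(), 1, buf.size(), f);
    fclose(f);
    if (rd != buf.size())
        return 1;

    LibRaw raw;
    // open_buffer() reads from the caller's memory, so buf must stay alive
    // until processing is finished.
    if (raw.open_buffer(buf.data(), buf.size()) != LIBRAW_SUCCESS)
        return 1;
    raw.unpack();
    raw.dcraw_process();
    return 0;
}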
There is a very similar report related to digiKam: https://github.com/LibRaw/LibRaw/issues/186
Quote from my last reply in that thread:
1st: the backtrace points to LibRaw::LibRaw (the constructor), so the problem is not related to any specific RAW file.
2nd: I've tried Xcode 6, Xcode 8, and Xcode 9 builds (using make -f Makefile.dist) and was unable to reproduce the problem with either dcraw_emu (single-threaded) or half_mt (multithreaded). Both samples work fine with the DSC_1796.NEF sample (link above). The sample was duplicated into many identical files: DSC_1796-[0-9].NEF.
So, I still suspect that this is not a LibRaw problem but an insufficient-stack problem. LibRaw objects are big (they contain, e.g., several 16-bit curves), so the default stack size may not be enough; it is better to allocate the LibRaw object dynamically to avoid that.
Is there any way to check whether enough stack space is available [in your app]?
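For reference, a minimal sketch of the dynamic allocation suggested above (the file name is just the sample from this thread):

#include <libraw/libraw.h>
#include <memory>

int main()
{
    // Heap allocation keeps the large LibRaw object (curves, internal buffers)
    // off a possibly small thread stack.
    std::unique_ptr<LibRaw> raw(new LibRaw);

    if (raw->open_file("DSC_1796.NEF") != LIBRAW_SUCCESS)
        return 1;
    raw->unpack();
    raw->dcraw_process();
    return 0;
}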
OK, I got it. Thank you.
Makefile.mingw already defines -DLIBRAW_NODLL, thus disabling DllDef (and, of course, DLL builds).
DllDef is defined in libraw_types.h:
#ifdef WIN32
#ifdef LIBRAW_NODLL
#define DllDef
#else
#ifdef LIBRAW_BUILDLIB
#define DllDef __declspec(dllexport)
#else
#define DllDef __declspec(dllimport)
#endif
#endif
#else
#define DllDef
#endif
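So a client program linked against that static MinGW build has to see LIBRAW_NODLL too; otherwise, on Windows builds, DllDef expands to __declspec(dllimport), which typically breaks linking against a static library. A rough sketch (paths, library name, and extra linker flags are assumptions that depend on your install):

// Build, for example:
//   g++ -DLIBRAW_NODLL -I/usr/local/include main.cpp -L/usr/local/lib -lraw -o main
#include <libraw/libraw.h>

int main()
{
    LibRaw raw;   // with LIBRAW_NODLL, DllDef is empty and no __declspec is involved
    return 0;
}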
Sorry, I think I misunderstood.
The problem is that, because of the error, I cannot compile the lib to test it. I thought you told me that removing the flag would make it OK.
Sorry,
I cannot understand the question. What feature do you want to disable?
OK, thanks. Can I disable that?
Makefile.mingw builds LibRaw with -DLIBRAW_NODLL, so DllDef is defined as empty.
OK, I will try to investigate, but I tried to compile LibRaw on MSYS2 with no success.
I've a lot of
Could you please provide a more detailed backtrace, so we can see what line in the LibRaw constructor caused that?
I do not see any incorrect code in the quoted line. Could you please provide some additional data (e.g. a stack trace)?
HEIC is not a raw format but a processed image format (similar to JPEG).
Almost every camera employs anti-aliasing; that would seem to result in DNGs with approximately one-quarter the effective pixel count, with color artifacts possibly(!) eliminated or greatly reduced, since that is of course the REASON for AA.
Gaussian blur should give a weak approximation of AA. For example, under AA, light falling on the top-left (red) pixel wouldn't be shared with its upper and leftward neighbors; it would all go to the green and blue pixels to the right of and below the red one (in the common arrangement).
But a Gaussian blur would send a fraction of its light to all 8 green and blue neighbors, while reducing the light attributed to the original red pixel. In that sense, there could be excessive fuzzing of the artificial image versus what a perfect lens + AA filter would produce.
(Happy to be corrected on this, btw)
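To make that point concrete, here is a hypothetical sketch (not anything LibRaw or a camera actually does) of a 3x3 Gaussian blur applied directly to a Bayer mosaic: every photosite hands a fraction of its signal to all 8 neighbors regardless of their color, which is exactly the difference from a real AA filter described above.

#include <vector>

// Apply a normalized 3x3 Gaussian kernel to a w*h mosaic stored row-major.
std::vector<unsigned short> blur_mosaic(const std::vector<unsigned short> &in,
                                        int w, int h)
{
    static const float k[3][3] = {{1, 2, 1}, {2, 4, 2}, {1, 2, 1}}; // sums to 16
    std::vector<unsigned short> out(in.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float acc = 0, wsum = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int yy = y + dy, xx = x + dx;
                    if (yy < 0 || yy >= h || xx < 0 || xx >= w)
                        continue;             // skip neighbors outside the frame
                    acc  += k[dy + 1][dx + 1] * in[yy * w + xx];
                    wsum += k[dy + 1][dx + 1];
                }
            out[y * w + x] = (unsigned short)(acc / wsum + 0.5f);
        }
    return out;
}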
Some cameras record four multipliers, while some (RGB/3-color) may use only three. In the second case, cam_mul[1] (green) should be copied to cam_mul[3] (the 'second green').
After that, cam_mul[] is usually normalized (i.e. divided by the smallest value, so [1500,1000,1200,1000] becomes [1.5,1,1.2,1]).
And, yes, the color channels are then multiplied by the cam_mul[color] values.
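A minimal sketch of exactly those steps, operating on a local copy of the 4-element multiplier array (e.g. the values LibRaw reports in imgdata.color.cam_mul); this is an illustration, not LibRaw's internal code:

#include <algorithm>

void normalize_cam_mul(float cam_mul[4])
{
    // Three-color cameras leave the 'second green' slot empty.
    if (cam_mul[3] <= 0.0f)
        cam_mul[3] = cam_mul[1];

    // Divide by the smallest multiplier: [1500,1000,1200,1000] -> [1.5,1,1.2,1]
    float smallest = *std::min_element(cam_mul, cam_mul + 4);
    if (smallest > 0.0f)
        for (int i = 0; i < 4; ++i)
            cam_mul[i] /= smallest;
    // Each color channel is then multiplied by its cam_mul[color] value.
}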
Dear Sir,
Unfortunately, we cannot help here. The LibRaw Python bindings (rawpy) are not our product, so we do not know what the 'black_level_per_channel' vector (or the other variables you mentioned) refers to.
LibRaw does not save RAW files (including DNG); consider the Adobe DNG SDK for that.
Here is the patch for 2000D/4000D: https://www.dropbox.com/s/erxiu7f63c4qao9/2000d.patch?dl=0
Clipping is necessary to avoid the 'pink clouds' problem.
Thanks for the response. It looks like FreeImage already sets that flag. But I think I was misunderstanding anyway: I thought that, because (for a 12-bit image) the maximum wasn't 0xfff but 0xf00, which was then scaled to 0xffff, data was being discarded. From a closer look at the source it looks like this is just because the black level (0xff) is subtracted.
Therefore it looks like the only way to "recover" the highlights is to use a method similar to LibRaw's recover_highlights method. Unfortunately, I need to do this after the data is converted to RGB, and if I understand correctly the recover_highlights method works on all four channels (though I'm not really sure what the variables kc and hsat are, or what the difference between kc and c is; if you could explain this I'd be thankful). Are you aware of any methods that work on 3-channel RGB data?
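For what it's worth, here is a worked version of that arithmetic as a small sketch (the 12-bit maximum and 0xff black level are just the values quoted above; real cameras report their own):

#include <cstdio>

int main()
{
    const int raw_max = 0x0fff;             // nominal 12-bit maximum
    const int black   = 0x00ff;             // black level, subtracted first
    const int usable  = raw_max - black;    // 0x0f00 -- the "maximum" seen after subtraction
    const double scale = 65535.0 / usable;  // stretch the remaining range to 16 bits

    std::printf("usable max = 0x%x, scale = %.3f, 0x0fff maps to %d\n",
                usable, scale, (int)(usable * scale + 0.5));
    return 0;
}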
It looks like LibRaw is not added to the linker input.
Thank you for the report; we will inspect it in depth.
Some EXIF/Makernotes fields are parsed in LibRaw, some are not.
Exposure time is available via imgdata.other.shutter
imgdata.params.no_auto_bright=1 will, most likely, solve the problem.
Not sure it is possible to set via FreeImage
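For reference, when LibRaw is used directly rather than through FreeImage, both of the items above are plain field accesses; a minimal sketch (the file name is just an example):

#include <libraw/libraw.h>
#include <cstdio>

int main()
{
    LibRaw raw;
    raw.imgdata.params.no_auto_bright = 1;   // disable automatic brightening

    if (raw.open_file("sample.NEF") != LIBRAW_SUCCESS)
        return 1;

    std::printf("shutter: %g s\n", raw.imgdata.other.shutter);  // exposure time

    raw.unpack();
    raw.dcraw_process();
    return 0;
}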
Most recent Sony cameras use the 'compressed ARW2.3' format explained in this article: https://www.rawdigger.com/howtouse/sony-craw-arw2-posterization-detection (scroll to the 'Inside sony cRAW format' section).
Yes, 0-16xxx range is correct.