The LIBRAW_OPENMP define should be set only if you use OpenMP builds.
The Fuji compressed decoder contains a critical section. This section is implemented via
#pragma omp critical // --> if LIBRAW_OPENMP defined
and via its own locking mechanics if OpenMP is not used.
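An illustrative sketch of that pattern (not LibRaw's actual source; the function and lock names are made up):

#ifdef LIBRAW_OPENMP
// built with OpenMP: the "#pragma omp critical" below does the locking
#else
#include <mutex>
static std::mutex decoder_lock;   // own locking mechanics when OpenMP is not used
#endif

static void update_shared_state()
{
#ifdef LIBRAW_OPENMP
#pragma omp critical(fuji_decoder)
  {
    // ... touch the shared decoder state here ...
  }
#else
  std::lock_guard<std::mutex> guard(decoder_lock);
  // ... touch the shared decoder state here ...
#endif
}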
Windows compilers are not limited to MSVC/MinGW (Intel, clang /very different flavours on Windows/, PGI, Watcom, ...).
Although most compilers *should* mimic one of the mainstream ones, I'm not sure they really do it (the right way and in all versions).
So, WIN32 is defined manually while building LibRaw to indicate that a Win32-like environment is present.
We're open to patches: if someone wants to accurately replace #ifdef WIN32 with something better (that works automatically with the most common compilers), we'll happily use that patch (too late for 0.19, but in a future version).
Thank you for the explanation. I'm aware of the difference between WIN32 and _WIN32, but this use seems a bit irritating. Why not use _MSC_VER or some _MINGW macro to identify the compiler?
AFAIK, Delphi developers use the LibRaw C API (I do not know whether it needs an additional wrapper level or not).
Yes, you need it in the app too.
WIN32 does not mean 'not 64-bit'.
Please note that WIN32 and _WIN32 are not the same. _WIN32 is defined by both MS and MinGW/Cygwin compilers. BTW, (old versions of) MinGW do not support Unicode filenames. So, to use the open_wfile() call (needed if you wish to support non-native-locale filenames) you need to define the WIN32 macro.
Does this mean that applications should also #define WIN32 before including libraw.h? At least for x64 builds this might not be the case by default.
Erwin
LibRaw's Makefile.msvc also defines WIN32:
COPT=/EHsc /MP /MD /I. /DWIN32 /O2....
(while Makefile.mingw does not)
This is the way we distinguish the native win32/Visual Studio build environment from an emulated Linux-like one (with time.h and other POSIX calls).
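On the application side, a minimal sketch of mirroring that define before including libraw.h and using the C API's wide-character open (the define can equally be passed on the compiler command line as /DWIN32; the file path is a placeholder and error handling is reduced to a minimum):

#ifndef WIN32
#define WIN32                 // keep in sync with how libraw.dll was built
#endif
#include "libraw/libraw.h"

int main()
{
    libraw_data_t *lr = libraw_init(0);                        // C API handle
    int rc = libraw_open_wfile(lr, L"D:\\raw\\sample.NEF");    // wide-char open, placeholder path
    if (rc == LIBRAW_SUCCESS)
        libraw_unpack(lr);
    libraw_close(lr);
    return 0;
}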
Fantastic! Thank you!
We decided to sync our color data with Adobe for Canon 6D: https://github.com/LibRaw/LibRaw/commit/4b76e9b385a062fdb60be8240b6ecaf3...
Dear Sir:
1. We suggest that every software developer decides for himself how to proceed ;)
2. Most probably.
3. LibRaw supports more cameras than Adobe do in their public releases. For the cameras that are supported by Adobe we mostly use Adobe matrices. Normally we add comments when we don't.
The "off" look depends a lot on how one maps values to an RGB working space. If simple clipping (especially to sRGB) is used and the tone curve is "simple", things may indeed be off.
On a side note, LibRaw is not meant to be a full-fledged raw converter, the colour rendering is there just for the purpose of having a preview.
Ah, got it. Thanks for getting back to me. I've got a few follow-up questions:
1. Is there a list of cameras for which the matrices are known to be not particularly good so that we can work on improving them?
2. Would you accept a PR to update the matrix for the 6D to the values above?
3. When you say "standard data" -- what does that mean? Based on the comments in the code, my understanding was that the matrices were mostly extracted from DNGs; however, it seems that the 6D values came from somewhere else?
For what it's worth, this particular matrix was pretty far off (it was mapping reds to a weird shade of almost orange); I don't know about the general case, but I do think it would be worthwhile to fix this matrix in particular.
Thanks again,
James.
Dear Sir:
> cam_xyz should exactly match ColorMatrix2
No, cam_xyz is taken from the standard data; ColorMatrix2 is taken from the particular DNG file.
At the time we put the standard data into adobe_coeff under "Canon EOS 6D", the matrix that Adobe had in their profile was not particularly good. You can replace the matrix with the one you prefer and recompile.
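A quick way to see which matrix LibRaw actually picked up for a given file is to print cam_xyz right after open_file(); a minimal sketch (not part of LibRaw's samples; error handling kept to a minimum):

#include <cstdio>
#include "libraw/libraw.h"

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    LibRaw rp;
    if (rp.open_file(argv[1]) != LIBRAW_SUCCESS) return 1;
    // cam_xyz is the matrix LibRaw selected for this camera, which you can
    // compare against the ColorMatrix2 tag stored in a DNG of the same model.
    for (int r = 0; r < 3; r++)
        printf("%9.4f %9.4f %9.4f\n",
               rp.imgdata.color.cam_xyz[r][0],
               rp.imgdata.color.cam_xyz[r][1],
               rp.imgdata.color.cam_xyz[r][2]);
    return 0;
}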
Sorry, no search. Use Google with the site:libraw.org specifier.
Go here:
https://github.com/butcherg/rawproc/blob/master/gimage.cpp
Scroll to line 77, and you'll find a C++ method that contains the code to turn 16-bit RGB data into RGB floating point 0.0-1.0. It's the loop within the conditional 'colors==3'. To meet your requirements, the 16-bit input data should not be scaled.
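In essence the conversion is just a division by the 16-bit maximum; a simplified sketch of that step (buffer names are made up, not rawproc's):

#include <cstdint>
#include <vector>

// Map 16-bit samples to floating point 0.0-1.0; values stay linear,
// with no additional scaling applied.
std::vector<float> to_float(const std::vector<uint16_t> &img16)
{
    std::vector<float> imgf(img16.size());
    for (size_t i = 0; i < img16.size(); ++i)
        imgf[i] = img16[i] / 65535.0f;
    return imgf;
}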
Iliah,
I did notice that Nikon Hacker claims to have a hack for the D5100 and D7000. I also e-mailed Nikon to ask if an "astronomy firmware" update could be made available for the D5300. Nikon has not responded yet, which is a shame because Nikon Hacker gives me the impression that it is possible to do.
Thanks once again for clarifying...it's nice to know that someone is continuing to support the efforts begun by Dave Coffin.
Peter
Dear Sir:
Sorry for being unclear. Yes, the raw data in the NEF file is clamped at 588. There are no values below, and no way of recovering the missing values from the data - they are not there. As far as I know, hacks to obtain true dark current and to disable the star eater are available for the D5100 and D7000, but you can reach out to the Nikon Hacker community for more up-to-date info on the subject.
Iliah,
Are you saying that the RAW data in the file is clamped at 588 and that there is no possibility of recovering the unclamped values?
Peter
Dear Sir:
I checked the file, black is indeed at 588, clamping is evident.
Yes...I understand the precautions for taking DARKs. I also add a computer fan blowing air on the rear of the camera to limit the heat buildup in the camera.
Dear Sir:
Could you please upload the dark frame from your Nikon for analysis and send us the link to info@libraw.org?
On a side note, it is not always sufficient to cover the lens, for dSLRs I cover the viewfinder too. Covering the whole camera may result in increased heat buildup. I also remove batteries and use an external power source.
I agree that the black level for a Nikon NEF file is 600. For a 14-bit Canon CR2 file this black level is 2048. When I look at a Canon ISO 1600, 300-second exposure DARK frame (lens cap installed, blocking all light) I notice that there are pixel values much lower than 2048. If you view the Canon data as a histogram you see a full bell-curve-shaped histogram centered at roughly 2048. The values extend downward to 1800 or even lower.
For a similar Nikon NEF DARK frame the histogram is centered at 600, but the entire left-hand side of the histogram is "missing". These values have all been clamped at a 588 minimum. I need access to the full histogram of values so that I can average multiple DARK frames to reduce the random noise and isolate the DARK current signal. I can do this quite effectively with a Canon camera. Doing this with a Nikon camera is less effective simply because of the huge number of values clamped at the 588 minimum.
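For reference, the averaging itself is straightforward once the unscaled raw values are accessible; a minimal sketch using LibRaw's C++ API (error handling kept to a minimum, and note this cannot recover values the camera has already clamped away):

#include <cstdio>
#include <vector>
#include "libraw/libraw.h"

int main(int argc, char **argv)
{
    std::vector<double> sum;
    size_t npix = 0;
    int nframes = 0;

    for (int i = 1; i < argc; ++i)                   // each argument is one dark frame
    {
        LibRaw rp;
        if (rp.open_file(argv[i]) != LIBRAW_SUCCESS || rp.unpack() != LIBRAW_SUCCESS)
            continue;
        const unsigned short *raw = rp.imgdata.rawdata.raw_image;   // undemosaiced, unscaled values
        size_t n = (size_t)rp.imgdata.sizes.raw_width * rp.imgdata.sizes.raw_height;
        if (!raw) continue;
        if (sum.empty()) { sum.assign(n, 0.0); npix = n; }
        if (n != npix) continue;                     // crude check that frames match
        for (size_t p = 0; p < n; ++p) sum[p] += raw[p];
        ++nframes;
    }
    if (!nframes) return 1;
    for (size_t p = 0; p < npix; ++p) sum[p] /= nframes;   // master dark (still clamped if the camera clamps)
    printf("Averaged %d frame(s), %zu pixels per frame\n", nframes, npix);
    return 0;
}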
Nikon D5300 black level value seems to be 600
When I use DCRAW to extract the RAW values from my D5300 NEF files I notice that the minimum value for all pixels is clamped at 588. Does the NEF format store values below this 588 clamp? Is DCRAW simply clamping the values at 588 based upon some NEF documentation? I am an astrophotographer and this 588 clamp interferes with generating master BIAS and DARK frames.
Yes, it is better to use the same compiler flags for both the app and the library :)
Also, your C-API getter/setter patches are welcome.
Thanks - this turned out to be a problem of my own making. I'll pass it along in case anyone runs into the same issue.
I'm developing under Windows.
I built the libraw.dll library using the VC++ project included with the distribution.
The software that calls libraw.dll is developed using a different compiler. There appear to be some object size and alignment differences between them. As such, things in libraw_data_t weren't where the calling program expected them to be. Setting use_camera_wb from the calling program actually wrote the int to the wrong place.
I worked around this by writing some additional get and set functions so the data in a libraw_data_t need never be accessed directly by the calling program.
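For anyone hitting the same ABI mismatch, a sketch of that kind of wrapper (built into the DLL with the same compiler as LibRaw; the function names are illustrative, not part of LibRaw's own C API):

#include "libraw/libraw.h"

// Exported helpers so the calling program never touches libraw_data_t
// fields directly; the struct layout is only interpreted by code built
// with the same compiler as libraw.dll.
extern "C" __declspec(dllexport)
void my_set_use_camera_wb(libraw_data_t *lr, int value)
{
    lr->params.use_camera_wb = value;
}

extern "C" __declspec(dllexport)
int my_get_use_camera_wb(const libraw_data_t *lr)
{
    return lr->params.use_camera_wb;
}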
Thanks again for your help, and for a great library.
::Jack