Thank you! So, LibRaw doesn't perform dcraw's -d Document Mode (no color, no interpolation).
LibRaw does not contain any color-to-BW conversion code.
So you may choose between two options:
* convert the RGB data interpolated by LibRaw to BW using your preferred formula (see the sketch after this list)
* or implement your own raw-to-BW postprocessing instead of LibRaw::dcraw_process()
If you're processing data from a 'converted' camera (initially color, but with the CFA removed, such as a maxmax.com conversion), you may try setting imgdata.idata.colors to 1 after the unpack() phase.
For DNG files, the linearization curve is applied during unpack(), but no other processing is performed.
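A minimal sketch of the first option (my own illustration, not LibRaw's code; the Rec.709 luma weights are just an example, substitute any mixing formula you prefer):

    #include <libraw/libraw.h>
    #include <cstddef>
    #include <vector>

    // Sketch: run the normal LibRaw pipeline to interpolated 8-bit RGB,
    // then collapse each pixel to BW with a Rec.709 luma formula.
    std::vector<unsigned char> raw_to_bw(const char *path) {
      LibRaw proc;
      std::vector<unsigned char> gray;
      if (proc.open_file(path) != LIBRAW_SUCCESS) return gray;
      if (proc.unpack() != LIBRAW_SUCCESS) return gray;
      if (proc.dcraw_process() != LIBRAW_SUCCESS) return gray;
      int err = 0;
      libraw_processed_image_t *img = proc.dcraw_make_mem_image(&err);
      if (img && img->type == LIBRAW_IMAGE_BITMAP && img->colors == 3 && img->bits == 8) {
        gray.resize((size_t)img->width * img->height);
        for (size_t i = 0; i < gray.size(); i++) {
          const unsigned char *p = img->data + 3 * i; // pixel-interleaved RGB
          gray[i] = (unsigned char)(0.2126f * p[0] + 0.7152f * p[1] + 0.0722f * p[2] + 0.5f);
        }
      }
      if (img) LibRaw::dcraw_clear_mem(img);
      return gray;
    }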
Regarding your first assumption, you're right: there is an ADC with a fixed bit count, so the data should fit into that range.
But things are more complicated:
1. Some cameras use the full pixel (sensor well) capacity at base (lowest) ISO, so the sensor saturates before the ADC clips and the data maximum is lower than the ADC range.
See this article: http://www.rawdigger.com/howtouse/rawdigger-histograms-overexposure-shapes
and inspect the Panasonic histograms at low ISO.
(You may also find the RawDigger software very useful for your work, to see the raw data 'as is'.)
2. Some cameras alter the RAW data in some way (the Sony lossy compression mentioned above, and many other formats with a highlight-compression tone curve), so the data range is not the same as the ADC range.
2b. Some cameras may subtract the 'black level' (bias) before recording raw values, resulting in lower maximum values.
2c. Some cameras may clip data below the ADC maximum value to avoid ADC non-linearity (many Canon cameras).
Using the wrong maximum value in processing will result in falsely colored highlights: http://blog.lexa.ru/2010/03/28/taina_rozovykh_oblakov.html (sorry, it is in Russian, but Google Translate will do the trick).
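A small illustration of why the maximum matters (my own sketch, not LibRaw code): raw values are normalized against the black level and maximum, and if 'maximum' is set above the real data ceiling, clipped highlights normalize below 1.0 and the per-channel white-balance multipliers then tint them:

    #include <algorithm>

    // Sketch: normalize a raw sample to [0, 1] using black level and maximum.
    // With an overestimated 'maximum', truly clipped pixels come out below 1.0,
    // and after per-channel white balance they acquire a false color cast.
    float normalize_raw(unsigned raw, unsigned black, unsigned maximum) {
      float v = (float)((int)raw - (int)black) / (float)(maximum - black);
      return std::min(std::max(v, 0.0f), 1.0f); // clamp
    }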
Hi Alex,
Thanks for the reply. I am an ISP algorithm engineer; I use LibRaw to read DNG and CR2 files for algorithm tuning.
Usually a sensor outputs raw data at some bit width, such as 10-bit, 12-bit, or 14-bit over the MIPI interface, and that bit width is what my C-model uses.
As you mentioned, LibRaw provides the maximum value in imgdata.color.maximum. For a 14-bit sensor output I see a maximum value of 0x3F60, which is less than 0x3FFF, so I may need to write some code to detect the highest non-zero bit in order to know the real bit width (14 in this case) of the 16-bit data.
Anyway, thanks for your reply. LibRaw is really helpful for me. Thanks very much.
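For what it's worth, the detection described above fits in a few lines; a sketch (keeping in mind the caveats from the previous reply, since the data maximum may sit below the nominal ADC ceiling):

    // Sketch: smallest bit count n such that (2^n - 1) covers the data maximum.
    // For imgdata.color.maximum == 0x3F60 this returns 14.
    int effective_bits(unsigned maximum) {
      int bits = 0;
      while (((1u << bits) - 1) < maximum)
        ++bits;
      return bits;
    }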
When I run the program through the command prompt (the *.exe located in the bin folder) it seems to work fine; for example, I can load my raw file and get 4 separate TIFF files using 4channels.exe.
Does the program run outside of the debugger?
(Use a CMD window to run it; it prints the 'command line help' output if you run it without a raw file on the command line.)
I've added the breakpoint at the beginning of main() inside dcraw_emu.cpp but I get the same result as before.
Try adding a breakpoint at the beginning of main().
The target .exe does exist; I had to change a few properties in the linker so Visual Studio was able to find it. Now when I debug dcraw_emu.cpp, a blank window briefly flashes and I get the following message:
"The program '[10976] dcraw_emu.exe' has exited with code 1 (0x1)."
Please make sure that the target .exe really exists after the build stage.
Yes, that's what I've been trying to do, but I receive an error that a specified file cannot be found. Here's a 30-second video showing the error:
https://youtu.be/_zGPEnCnJxc
I really appreciate your help
In Visual Studio Solution Explorer:
- right-click the project you want to run under the debugger (dcraw_emu, etc.)
- select 'Set as StartUp Project' in the context menu
Yes, I understand, but how would I go about running the examples (e.g. 4channels.exe) in Visual Studio after installing the library?
A DLL file is a (shared) library; it cannot be 'started'.
There is no such thing as a single 'bit depth'.
Consider the Sony ARW2 format (described in detail here: http://www.rawdigger.com/howtouse/sony-craw-arw2-posterization-detection):
- local 7-bit lossy storage
- 11-bit non-linear data after lossy (de)compression
- a 0..17204 data range after the linearization curve (so, 'more than 14 bits')
Also, it is advertised as a 14-bit ADC (Sony A7 series), yet the data is gapped even in the shadows.
So, what is the bit depth?
LibRaw provides the data range (maximum value) in imgdata.color.maximum; see the sketch below.
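A minimal sketch of reading that field (plain LibRaw API; the black level is printed alongside, since the usable range really runs from black to maximum):

    #include <libraw/libraw.h>
    #include <cstdio>

    // Sketch: print the data range LibRaw reports for a file.
    // The fields are filled in by open_file()/unpack().
    int main(int argc, char **argv) {
      if (argc < 2) return 1;
      LibRaw proc;
      if (proc.open_file(argv[1]) != LIBRAW_SUCCESS) return 1;
      if (proc.unpack() != LIBRAW_SUCCESS) return 1;
      printf("black: %u  maximum: %u\n",
             proc.imgdata.color.black, proc.imgdata.color.maximum);
      return 0;
    }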
copy_mem_image() is called from dcraw_make_mem_image() with the bgr parameter set to 0, so the order is always RGB; see the sketch below.
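A sketch of addressing pixel n under that layout, covering the 16-bit case asked about below (component order stays RGB; each component simply becomes a native-endian unsigned short):

    #include <libraw/libraw.h>

    // Sketch: fetch the red component of pixel n from a dcraw_make_mem_image()
    // result. Layout is pixel-interleaved RGB for both 8- and 16-bit output.
    unsigned red_of_pixel(const libraw_processed_image_t *img, unsigned n) {
      if (img->bits == 8)
        return img->data[3 * n]; // [R G B R G B ...], one byte per component
      const unsigned short *p = (const unsigned short *)img->data;
      return p[3 * n];           // same order, two bytes per component
    }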
When calling dcraw_make_mem_image() (or even copy_mem_image()), the libraw_processed_image_t.data order is [R(0), G(0), B(0), R(1), G(1), B(1)], where (n) is the nth pixel, right? Does this change when the output is 16 bits, since a char may be only 8 bits on some platforms? If so, what does the order look like?
I tried mem-image.cpp (from a terminal, writing to PPM) and it worked, which means it is probably an issue in my conversion to Java. byte is signed, because all Java numeric types are signed; however, when casting to an int you can get the unsigned value by applying & 0xFF to the byte. If I do this, or just use an int[] instead of a byte[], I lose the yellow color (new image).
BTW, is the byte type signed or unsigned?
Your image sample is not a Bayer pattern but something strange (with a Bayer pattern you would still see the image, maybe with wrong colors or reduced brightness, but the subject would be visible).
Most likely this is a signed/unsigned conversion problem, or a wrong row stride.
Use samples/mem-image.cpp as an example of the dcraw_make_mem_image() call; make sure this example can process your DNG, then modify the code for your needs.
The following code creates 32-bit ARGB:

    public static Bitmap createFromColorChannels(byte[] channels, int width, int height) {
        int[] color = new int[channels.length / 3];
        for (int i = 0; i < color.length; i++) {
            // note: i advances by 1, so each pixel re-reads overlapping bytes;
            // the 'wrong row stride' mentioned above (should be channels[3*i], [3*i+1], [3*i+2])
            color[i] = Color.rgb(channels[i], channels[i + 1], channels[i + 2]);
        }
        ...

dcraw_make_mem_image() creates 24-bit RGB, not 32-bit ARGB.
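For reference, a corrected packing loop (shown in C++ rather than Java; the identifiers are mine) that avoids both pitfalls diagnosed above, the stride and the sign extension:

    #include <cstdint>
    #include <cstddef>

    // Sketch: pack the 24-bit interleaved RGB buffer into 32-bit ARGB.
    // Pixel i reads bytes 3*i .. 3*i+2 (correct stride), and unsigned char
    // widens without sign extension (the Java equivalent needs & 0xFF).
    void rgb24_to_argb32(const unsigned char *rgb, uint32_t *argb, size_t npixels) {
      for (size_t i = 0; i < npixels; i++) {
        uint32_t r = rgb[3 * i], g = rgb[3 * i + 1], b = rgb[3 * i + 2];
        argb[i] = 0xFF000000u | (r << 16) | (g << 8) | b;
      }
    }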
Ok.
The FreeImage source code contains the old version without the fix (0.17.a1). I will include only the fix in our code.
Thanks.
Also, your patch will read an additional byte via f = getc(ifp);
I do not see any attempt to put this byte back into the input stream, so most likely your code does not decode the data correctly.
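The usual fix for that pattern (a generic stdio sketch, not the actual patch): a peeked byte has to go back via ungetc(), or the decoder reads from the wrong position afterwards:

    #include <cstdio>

    // Sketch: peek one byte without disturbing the stream position.
    int peek_byte(FILE *ifp) {
      int f = getc(ifp);
      if (f != EOF)
        ungetc(f, ifp); // push the byte back so decoding stays in sync
      return f;
    }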
Current LibRaw already contains this check, plus an additional loop-count check.