Hi,
I've noticed that LibRaw decodes 6022 x 4020 pixels from the sensors used in the Canon EOS 80D, 200D, 77D etc.,
while DNG Converter (and the original dcraw) decode 6024 x 4022 pixels.
Can this be changed?
I ask because calibration images (darks, flats, bias, bad-pixel maps) for astro imaging cannot be used interchangeably between the two.
Greetings, Erwin
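For reference, a minimal sketch (not an official LibRaw sample; the file name is whatever you pass in) that prints the visible area LibRaw reports next to the full sensor frame, so the size difference against DNG Converter / dcraw can be checked per camera:

    #include <cstdio>
    #include "libraw/libraw.h"

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s rawfile\n", argv[0]); return 1; }
        LibRaw rp;
        if (rp.open_file(argv[1]) != LIBRAW_SUCCESS) return 1;
        const libraw_image_sizes_t &S = rp.imgdata.sizes;
        // Full sensor frame vs. the visible (cropped) area LibRaw will decode
        printf("full raw frame: %d x %d\n", S.raw_width, S.raw_height);
        printf("visible area  : %d x %d (left margin %d, top margin %d)\n",
               S.width, S.height, S.left_margin, S.top_margin);
        return 0;
    }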
Thank you very much!
New LibRaw-201812 snapshot just published at https://github.com/LibRaw/LibRaw
This version supports 2000D/4000D (and other fresh cameras)
Some interpolation methods rely on proper lightness calculation. For these methods, demosaicking without prior white balance will produce wrong (and visually very bad) results.
Also, data scaling is desirable to lower rounding errors.
Anyway, you may create your own dcraw_process() call without the steps that are unnecessary for you. In LibRaw's dcraw_process() there is no way to skip the pre-interpolation steps (scale_colors, pre_interpolate), and the after-interpolation convert_to_rgb() is also always executed.
Alternatively, you may fix (pin) all processing params via the user_* variables, as in the sketch below.
You may also use alternate libraries.
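A rough sketch of the second option, assuming you want identical processing for lights and calibration frames: pin the relevant params/user_* fields before dcraw_process(). The parameter values and the output file name are placeholders, not recommendations:

    #include "libraw/libraw.h"

    // Process one file with all parameters pinned; returns a LibRaw error code.
    int process_fixed(const char *path)
    {
        LibRaw rp;
        libraw_output_params_t &P = rp.imgdata.params;

        P.user_qual      = 0;          // fixed demosaic method (0 = linear interpolation)
        P.user_mul[0] = P.user_mul[1] =
        P.user_mul[2] = P.user_mul[3] = 1.0f;   // fixed (unity) white balance
        P.no_auto_bright = 1;          // no automatic brightening
        P.gamm[0] = P.gamm[1] = 1.0;   // linear output gamma
        P.output_bps     = 16;         // 16-bit output

        int rc;
        if ((rc = rp.open_file(path)) != LIBRAW_SUCCESS) return rc;
        if ((rc = rp.unpack()) != LIBRAW_SUCCESS) return rc;
        if ((rc = rp.dcraw_process()) != LIBRAW_SUCCESS) return rc;
        return rp.dcraw_ppm_tiff_writer("out.ppm");    // placeholder output name
    }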
I looked at the sources on the GitHub page, and they look quite messy.
I struggle to put confidence in projects that have obsolete and current files mixed together, and many files that seem to serve no purpose. There is also a lack of consistent naming and structure.
And the documentation is maintained on a forum?
You should use Doxygen; that would look far more serious.
Thanks for the concept and the effort, though.
Wait a little. We'll publish the next snapshot soon (still waiting for Nikon Z6 samples: all crops/all compression methods, to make sure we support this camera in full).
I need the patch too. The link is not working for me as well...
Hi!
How can we help? I can provide images with multiple settings, if that is useful.
If you have other ideas, let me know!
Thanks!
Jose
Hi all,
I need the patch for the 2000D, but the link doesn't work anymore.
Could you please repost the patch?
Thanks
Thanks developer
Imagine a scene containing red and blue surfaces (patches), shot with a camera that has color (Bayer) filters.
The Red/Blue ratio of the pixel values on the red patch will be much higher than the R/B ratio on the blue patch. This is how a color camera works.
There is no way to fix that via single per-channel gains; R/B(red) will always be higher than R/B(blue), regardless of the gain used.
So demosaicking (i.e. recovering the missing color values), then RGB -> grayscale conversion, looks like the only way to go (see the sketch below).
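A sketch of that route with LibRaw, assuming an 8-bit, 3-color result from dcraw_make_mem_image(); the Rec.709 luma weights are my choice for illustration, not something LibRaw prescribes:

    #include <vector>
    #include "libraw/libraw.h"

    // Returns a width*height grayscale buffer, or an empty vector on error.
    std::vector<unsigned char> raw_to_gray(const char *path)
    {
        LibRaw rp;
        std::vector<unsigned char> gray;
        if (rp.open_file(path) != LIBRAW_SUCCESS) return gray;
        if (rp.unpack() != LIBRAW_SUCCESS) return gray;
        if (rp.dcraw_process() != LIBRAW_SUCCESS) return gray;   // demosaic, WB, etc.

        libraw_processed_image_t *img = rp.dcraw_make_mem_image();
        if (!img) return gray;
        if (img->colors == 3 && img->bits == 8)
        {
            gray.resize((size_t)img->width * img->height);
            for (size_t i = 0; i < gray.size(); ++i)
            {
                const unsigned char *p = img->data + 3 * i;      // interleaved R,G,B
                gray[i] = (unsigned char)(0.2126 * p[0] + 0.7152 * p[1] + 0.0722 * p[2] + 0.5);
            }
        }
        LibRaw::dcraw_clear_mem(img);
        return gray;
    }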
I was just wondering if you'd had any success with this? I'm considering my options for creating multiple full-resolution, non-Bayer, non-interpolated, monochrome/grayscale cameras for a Raspberry Pi. I could try to remove the Bayer filters from all of them, but this is tricky and could easily cause damage. I'd much rather process the raw files from the cameras (v2, 8 MP) to create monochrome, but with the red, green and blue scaled correctly...
A Google search with site:libraw.org works well enough that we don't maintain our own search engine :)
LibRaw provides the LibRaw::open_bayer() call for such tasks.
Unfortunately, the only place where it is documented is the Changelog.txt file; it is missing from the docs (to be fixed).
Also, look into samples\openbayer_example.cpp.
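A sketch modelled on that sample, assuming a bare 16-bit RGGB buffer with no margins; the pattern, black level and output name here are placeholders, and samples\openbayer_example.cpp remains the authoritative reference:

    #include "libraw/libraw.h"

    // Decode a bare 16-bit RGGB Bayer buffer of the given dimensions.
    int decode_plain_bayer(unsigned short *pixels, int width, int height)
    {
        LibRaw rp;
        int rc = rp.open_bayer((unsigned char *)pixels,
                               width * height * sizeof(unsigned short),
                               width, height,
                               0, 0, 0, 0,              // left/top/right/bottom margins
                               0,                       // procflags
                               LIBRAW_OPENBAYER_RGGB,   // CFA pattern (placeholder)
                               0,                       // unused bits
                               0,                       // otherflags
                               0);                      // black level (placeholder)
        if (rc != LIBRAW_SUCCESS) return rc;
        if ((rc = rp.unpack()) != LIBRAW_SUCCESS) return rc;
        if ((rc = rp.dcraw_process()) != LIBRAW_SUCCESS) return rc;
        return rp.dcraw_ppm_tiff_writer("bayer_out.ppm");
    }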
There are a lot of different RAW formats (data formats, metadata formats, metadata values); it's pretty hard to discuss them all at once.
In general, you may assume that unpack() provides linear data with the black level not subtracted.
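To illustrate, a small sketch (not LibRaw internals) of what that implies if you bypass dcraw_process(): the data in imgdata.rawdata.raw_image is linear but still carries the black-level offset, which you would subtract yourself:

    #include "libraw/libraw.h"

    // Subtract the common black level from the unpacked Bayer frame in place.
    void subtract_black(LibRaw &rp)
    {
        unsigned short *raw = rp.imgdata.rawdata.raw_image;   // Bayer plane, valid after unpack()
        if (!raw) return;                                     // non-Bayer data lives in other planes
        const unsigned black  = rp.imgdata.color.black;       // common offset (per-channel values: cblack[])
        const unsigned w      = rp.imgdata.sizes.raw_width;
        const unsigned h      = rp.imgdata.sizes.raw_height;
        const unsigned stride = rp.imgdata.sizes.raw_pitch / 2;  // row pitch in 16-bit samples
        for (unsigned row = 0; row < h; ++row)
            for (unsigned col = 0; col < w; ++col)
            {
                unsigned short &v = raw[row * stride + col];
                v = (v > black) ? (unsigned short)(v - black) : 0;
            }
    }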
Thanks, Alex!
If the camera records raw in linear space, there must be a way to know it during the "unpack()" stage, correct? Is it indicated somewhere in the metadata of the RAW file?
LibRaw applies (some) linearization data during the unpack() phase.
Thanks a lot!
Just wondering, do some cameras actually record raw data in linear space? If so, can LibRaw tell us whether the raw data is already in linear space?
White balance is, in most cases, applied to linear data, so linearization is done before white balance. The traditional approach to demosaicking is also to apply it to linear data, so linearization quite often precedes demosaicking as well. That makes linearization the first step of raw processing (see the toy example below).
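A toy example (nothing to do with LibRaw's actual curves; the numbers are invented) showing why the order matters: a white-balance gain applied before a nonlinear decode curve gives a different result than the same gain applied after it:

    #include <cstdio>

    int main()
    {
        const double dn = 1000.0;     // hypothetical encoded raw value
        const double gain = 2.0;      // hypothetical red WB gain
        auto linearize = [](double x) { return x * x / 4096.0; };   // toy nonlinear decode curve

        printf("linearize, then apply gain: %.1f\n", gain * linearize(dn));   // the usual order
        printf("apply gain, then linearize: %.1f\n", linearize(gain * dn));   // gives a different value
        return 0;
    }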
Hi Alex,
Just wondering where the "Linearization" stands during "raw processing"?
Thanks a lot,
Mio
As soon as we're able to decode it.
Any help is highly appreciated.
LibRaw allows you to set any white balance, including the one that comes from the camera metadata (use imgdata.params.user_mul[] for that; see the sketch below).
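A small sketch of both options: either let dcraw_process() use the as-shot white balance, or copy the camera multipliers into user_mul[] and adjust them; the 1.05 factor is just a placeholder tweak:

    #include "libraw/libraw.h"

    void set_white_balance(LibRaw &rp, bool use_as_shot)
    {
        if (use_as_shot)
        {
            rp.imgdata.params.use_camera_wb = 1;      // take WB from the camera metadata
        }
        else
        {
            for (int c = 0; c < 4; c++)               // start from the as-shot multipliers...
                rp.imgdata.params.user_mul[c] = rp.imgdata.color.cam_mul[c];
            rp.imgdata.params.user_mul[0] *= 1.05f;   // ...then tweak them, e.g. nudge red (placeholder)
        }
    }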
The question was a two-parter.
First: "So when you say daylight colour profile, it's a preset that comes with LibRaw vs what the camera thinks it should be?"
Second: if there is no way to correctly white balance to 6500 K, then why do we even have these numbers to begin with?
There are other reasons for wanting a correct D6500 white balance than matching a physical lamp. For example, if I have to work with raw files from several cameras: if they could all reliably be set to a 6500 K white balance, they would at least all match each other regardless of the actual lamp temperature, saving oneself the work of having to grade each camera individually.
Also, look at the rawtoaces project. It is specifically made to convert raw files to the ACES colour space, which is calibrated around a 6000 K (?) white point. Unfortunately, not all cameras have had their sensors analysed for spectral sensitivity.
Your question is 'is there any reason the derived values should be more correct....'
My question is: 'more correct for WHAT?'
A real scene is lit by some real (daylight) light source, not by an (imaginary/synthetic) black body at 6500 K.
Both settings are 'not correct' for the real image/real scene.