Note: same applies to the wf_debanding option
I have RAWs available from the 80D and the 200D:
80D, LibRaw: 6022 x 4020
80D, DCRAW: 6024 x 4022, 2 more cols on the left side, 2 extra rows at the top
80D, DNG-Converter: Same as DCRAW
200D, LibRaw: 6022 x 4020
200D, DCRAW: 6024 x 4020, 2 more cols on the left side of the image
200D, DNG-Converter: Same as DCRAW
Strange that the sizes are equal in DNG-Converter and DCRAW, but different for the two cameras.
Could you please specify what needs to be fixed: the left/top margin(s), or the visible area only?
I've noticed that LibRaw decodes 6022 x 4020 pixels from the sensors used in Canon's EOS 80D, 200D, 77D, etc.,
while DNG-Converter (and the original DCRAW) decodes 6024 x 4022 pixels.
Can this be changed?
I ask because calibration images (darks, flats, bias, bad-pixel maps) for astro imaging cannot be used interchangeably between the two.
New LibRaw-201812 snapshot just published at https://github.com/LibRaw/LibRaw
This version supports 2000D/4000D (and other fresh cameras)
Some interpolation methods are based on proper lightness calculation. For these methods, demosaic w/o prior white balance will provide wrong (and visually very bad) results.
Also, data scaling is desirable to lower rounding errors.
Anyway, you may create your own dcraw_process() call without the steps that are unnecessary for you. In LibRaw's dcraw_process() there is no way to skip the pre-interpolation steps (scale_colors, pre_interpolate), and the after-interpolation convert_to_rgb() is also always executed.
Alternatively, you may fix all processing params via user_* variables.
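As a sketch of that second approach, fixing the parameters via imgdata.params before calling dcraw_process() might look like the following (field names follow the LibRaw documentation; the specific values here are illustrative, and you should check the parameter set against your LibRaw version):

```cpp
// Hedged sketch: pin down dcraw_process() behaviour via user_* parameters
// so every file is processed identically (useful for calibration frames).
#include <libraw/libraw.h>

int main(int argc, char **argv)
{
    if (argc < 2) return 1;

    LibRaw proc;
    if (proc.open_file(argv[1]) != LIBRAW_SUCCESS) return 1;
    if (proc.unpack() != LIBRAW_SUCCESS) return 1;

    // Fixed white balance instead of camera/auto WB (illustrative gains):
    proc.imgdata.params.user_mul[0] = 1.0f; // R
    proc.imgdata.params.user_mul[1] = 1.0f; // G
    proc.imgdata.params.user_mul[2] = 1.0f; // B
    proc.imgdata.params.user_mul[3] = 1.0f; // G2

    proc.imgdata.params.user_qual = 0;      // fixed demosaic: linear interpolation
    proc.imgdata.params.no_auto_bright = 1; // disable automatic brightening
    proc.imgdata.params.output_bps = 16;    // 16-bit output
    proc.imgdata.params.gamm[0] = 1.0;      // linear "gamma" curve
    proc.imgdata.params.gamm[1] = 1.0;

    if (proc.dcraw_process() != LIBRAW_SUCCESS) return 1;
    return proc.dcraw_ppm_tiff_writer("out.tiff");
}
```

With all user_* parameters fixed this way, repeated runs over different files apply the same gains, demosaic, and output curve, which is what cross-file calibration needs.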
I looked at the GitHub page for the sources; it looks quite messy.
I struggle to put confidence in projects that have obsolete and current files merged together, plus many files that serve no purpose. There even seems to be a lack of consistent naming and structure.
Also, the documentation is kept on a forum?
You should use Doxygen; that would look much more serious.
Thanks for the concept and efforts though
Wait a little. We'll publish next snapshot soon (still waiting for Nikon Z6 samples: all crops/all compression methods, to make sure we support this camera in full)
I need the patch too. The link isn't working for me either...
How can we help? I can provide images with multiple settings, if that is useful.
If you have other ideas, let me know!
I need the patch for the 2000D, but the link doesn't work anymore.
Could you please repost the patch?
Imagine a scene containing red and blue surfaces (patches), shot with a camera that has color (Bayer) filters.
The red/blue pixel-value ratio on the red patch will be much higher than the R/B ratio on the blue patch. This is how a color camera works.
There is no way to fix this via single per-channel gains: R/B(red) will always be higher than R/B(blue), regardless of the gains used.
So demosaicking (i.e., recovering the missing color values), then RGB -> grayscale conversion, looks like the only way to go.
I was just wondering if you'd had any success with this? I'm considering my options for creating multiple full-resolution, non-Bayer, non-interpolated, monochrome/grayscale cameras for a Raspberry Pi. I could try to remove the Bayer filters from all of them, but this is tricky and could easily cause damage. I'd much rather process the raw files from the cameras (v2, 8 MP) to create monochrome images, but with red, green, and blue scaled correctly...
A Google search with site:libraw.org works well enough that we don't maintain our own search engine :)
LibRaw provides LibRaw::open_bayer() call for such tasks.
Unfortunately, the only place where it is documented is the Changelog.txt file; it is missing from the docs (to be fixed).
Also, look into samples\openbayer_example.cpp
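For orientation, a minimal sketch of an open_bayer() call is shown below. The parameter order follows the Changelog description; please verify it against your LibRaw version and the sample, and note that the buffer here is a made-up tiny RGGB frame, not real sensor data:

```cpp
// Hedged sketch: feed a plain Bayer buffer to LibRaw via open_bayer().
// Check the exact signature against your LibRaw version's Changelog.txt
// and samples/openbayer_example.cpp.
#include <libraw/libraw.h>
#include <vector>

int main()
{
    // Made-up 4x4 RGGB frame of 16-bit samples, no margins (illustration only).
    std::vector<unsigned short> frame(4 * 4, 1000);

    LibRaw proc;
    int rc = proc.open_bayer(
        reinterpret_cast<unsigned char *>(frame.data()),
        static_cast<unsigned>(frame.size() * sizeof(unsigned short)),
        4, 4,                   // raw width / height
        0, 0, 0, 0,             // left / top / right / bottom margins
        0,                      // procflags
        LIBRAW_OPENBAYER_RGGB,  // CFA pattern
        0,                      // unused bits
        0,                      // other flags
        0);                     // black level
    if (rc != LIBRAW_SUCCESS) return 1;

    proc.unpack();        // populate rawdata from the supplied buffer
    proc.dcraw_process(); // then process as with a normal raw file
    return 0;
}
```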
There are a lot of different RAW formats (data formats, metadata formats, metadata values), it's pretty hard to discuss them all at once.
In general, you may assume that unpack() provides linear data with the black level not subtracted.
If the camera records raw in linear space, there must be a way to know that during the unpack() stage, correct? Is it indicated somewhere in the metadata of the RAW file?
LibRaw applies (some) linearization data during the unpack() phase.
Thanks a lot!
Just wondering: do some cameras actually record raw data in linear space? If so, can LibRaw tell us whether the raw data is already linear?
White balance is, in most cases, applied to linear data, so linearization is done before white balance. The traditional approach to demosaicking is also to apply it to linear data, so linearization quite often precedes demosaicking as well. That makes linearization the first step of raw processing.
Just wondering where the "Linearization" stands during "raw processing"?
Thanks a lot,