I looked at the GitHub page for the sources; it looks quite messy.
I struggle to put confidence in projects that have obsolete and current files mixed together, plus many files that serve no purpose. There also seems to be a lack of consistent naming and structure.
And the documentation is kept on a forum?
You should use Doxygen; that would look far more serious.
Thanks for the concept and the effort, though.
Wait a little. We'll publish the next snapshot soon (we're still waiting for Nikon Z6 samples: all crops and all compression methods, to make sure we support this camera in full).
I need the patch too. The link is not working for me either...
How can we help? I can provide images with multiple settings, if that is useful.
If you have other ideas, let me know!
I need the patch for the 2000D, but the link doesn't work anymore.
Could you please repost the patch?
Imagine a scene containing red and blue surfaces (patches), shot with a camera that has color (Bayer) filters.
The red/blue ratio of the pixel values on the red patch will be much higher than the R/B ratio on the blue patch. This is how a color camera works.
There is no way to fix that with single per-channel gains: R/B on the red patch will always be higher than R/B on the blue patch, regardless of the gains used.
So demosaicing (i.e. recovering the missing color values), then RGB -> grayscale conversion, looks like the only way to go.
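A tiny numeric sketch of the point above (pure Python, with made-up pixel values): per-channel gains multiply both patches' R/B ratios by the same factor (gain_r/gain_b), so the red patch's ratio stays a fixed multiple of the blue patch's no matter which gains you pick.

```python
# Hypothetical Bayer-site readings: R sites read high on the red patch,
# B sites read high on the blue patch.
red_patch  = {"R": 3000, "B": 300}   # R/B ratio = 10.0
blue_patch = {"R": 250,  "B": 2500}  # R/B ratio = 0.1

def rb_ratio(patch, gain_r=1.0, gain_b=1.0):
    """R/B ratio after applying single per-channel gains."""
    return (patch["R"] * gain_r) / (patch["B"] * gain_b)

# Whatever gains we try, both ratios scale by the same gain_r/gain_b,
# so the red patch's ratio remains ~100x the blue patch's:
for gr, gb in [(1.0, 1.0), (0.5, 2.0), (4.0, 1.0)]:
    ratio_of_ratios = rb_ratio(red_patch, gr, gb) / rb_ratio(blue_patch, gr, gb)
    assert abs(ratio_of_ratios - 100.0) < 1e-9

# Hence the route to a proper grayscale image: demosaic to full RGB first,
# then convert, e.g. with the common Rec. 709 luma weights:
def to_gray(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
```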
I was just wondering if you'd had any success with this? I'm considering my options for creating multiple full-resolution, non-Bayer, non-interpolated, monochrome/grayscale cameras for a Raspberry Pi. I could try to remove the Bayer filters from all of them, but this is tricky and could easily cause damage. I'd much rather process the raw files from the cameras (v2, 8MP) to create monochrome images, but with the red, green and blue scaled correctly...
A Google search with site:libraw.org works well enough that we don't maintain our own search engine :)
LibRaw provides LibRaw::open_bayer() call for such tasks.
Unfortunately, the only place where it is documented is the Changelog.txt file; it is missing from the docs (to be fixed).
Also, look into samples\openbayer_example.cpp
There are a lot of different RAW formats (data formats, metadata formats, metadata values), it's pretty hard to discuss them all at once.
In general, you may assume that unpack() provides linear data with the black level not subtracted.
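For illustration, a minimal sketch (pure Python; the black and white levels are hypothetical, per-camera constants) of the black-level handling a caller would typically do on the linear values unpack() hands back:

```python
# Made-up 12-bit camera constants; real values come from camera metadata.
BLACK_LEVEL = 512
WHITE_LEVEL = 4095          # 12-bit saturation point

def linearize(raw_value):
    """Subtract the black level and normalize to [0, 1]."""
    v = max(raw_value - BLACK_LEVEL, 0)          # clamp: never negative
    return v / (WHITE_LEVEL - BLACK_LEVEL)

assert linearize(512) == 0.0    # black stays black
assert linearize(4095) == 1.0   # saturation maps to 1.0
assert linearize(100) == 0.0    # sub-black noise is clamped to zero
```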
If the camera records raw in linear space, there must be a way to know that during the unpack() stage, correct? Is it indicated somewhere in the metadata of the RAW file?
LibRaw applies (some) linearization data during the unpack() phase.
Thanks a lot!
Just wondering: do some cameras actually record raw data in linear space? If so, can LibRaw tell us that the raw data is already linear?
White balance is, in most cases, applied to linear data, so linearization is done before white balance. Traditional approach to demosaicking is also to apply it to linear data, so linearization quite often precedes demosaicking. That makes linearization the first step of raw processing.
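A toy sketch of that ordering (pure Python; the numbers and multipliers are invented, and real demosaicking of course interpolates a full Bayer mosaic rather than passing a triple through):

```python
def linearize(v, black=512, white=4095):
    # Step 1: linearization (here just black-level subtraction
    # plus normalization to [0, 1]).
    return max(v - black, 0) / (white - black)

def white_balance(rgb, mul=(2.0, 1.0, 1.5)):
    # Step 2: white balance = per-channel multipliers on LINEAR data.
    return tuple(c * m for c, m in zip(rgb, mul))

def demosaic(rgb):
    # Step 3: stand-in for demosaicking, which also expects linear
    # (and typically white-balanced) input; a real implementation
    # interpolates the missing color values at each photosite.
    return rgb

raw = (2048, 1800, 1400)   # made-up sensor readings
out = demosaic(white_balance(tuple(linearize(v) for v in raw)))
```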
Just wondering where linearization stands during raw processing?
Thanks a lot,
As soon as we're able to decode it.
Any help is highly appreciated.
LibRaw lets you set any white balance, including the one that comes from the camera metadata (use imgdata.params.user_mul for that).
The question was a two-parter.
First: "So when you say daylight colour profile, it's a preset that comes with LibRaw vs what the camera thinks it should be?"
Second: if there is no way to correctly white balance to 6500K, then why do we even have these numbers to begin with?
There are other reasons for wanting a correct D6500 white balance than matching a physical lamp. For example, if I have to work with raw files from several cameras: if they could all reliably be set to a 6500K white balance, they would at least all match each other regardless of the actual lamp temperature, saving oneself the work of having to grade each camera individually.
Also look at the rawtoaces project. It is specifically made to convert raw files to the ACES colour space, which is calibrated around a 6000K (?) white point. Unfortunately, not all cameras have had their sensors analysed for spectral sensitivity.
Your question is 'is there any reason the derived values should be more correct....'
My question is: 'more correct for WHAT?'
A real scene is lit by some real (daylight) light source, not by an (imaginary/synthetic) black body at 6500K.
Both settings are 'not correct' for the real image/real scene.
Sorry, but that doesn't really answer the question.
Unless your scene is lit by an exact D65 (imaginary) light source, both values are 'not correct'.
So when you say daylight colour profile, it's a preset that comes with LibRaw vs what the camera thinks it should be? Is there any reason the derived values should be more correct than the makernote values?