
Oh, those are non-dcraw adjustments that I coded myself.

I was serious: I apply color, white-balance, exposure, etc. corrections to the raw data before handing it to LibRaw. I wrote those adjustments myself and use LibRaw only to demosaic. No dcraw (and no LibRaw emulation of dcraw) is involved in the color correction, white balance, and so on.

I wish it were the case that the resulting 8-bit images without auto_bright were merely 'near black,' but take a histogram of the result and see for yourself:

(Output with output_bps = 8, no_auto_bright = 1, no_auto_scale = 1, user_black = 0.) Every pixel is 0, and this was an overexposed 8-bit raw. Do what you will with this information; I was just trying to repay a favor, thinking I might have found unintended behavior. I see now that LibRaw uses 16 bits internally, even when the input buffer is only 8 bpp.

So, what seems to be happening...
1. 8 bpp data in the buffer has a max value of 255
2. The buffer is read into LibRaw, which uses 16-bit data internally
3. no_auto_bright = 1, so values are never scaled up toward 65535
4. output_bps = 8, so every value is divided by 257 to scale down to a 255 max... but the largest value we had was 255, and 255 / 257 rounds down to 0 in integer math. Anything less than 255 will of course also become 0.
5. Every pixel is 0 in the resulting image!

If that's correct (?), let this post serve as a warning to any 8-bit LibRaw users who read this!