Thanks Alex. Could you please explain to me why it is better to manipulate raw data in the linear domain? I have seen this mentioned before, but the reasoning behind it isn't clear to me. The number of individual brightness levels available is determined by the bit depth of the camera, so any interpolated values in the linear domain (calculated during image manipulation) will have to be remapped to this same set of brightness levels. What am I missing?
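
To make my confusion concrete, here is a small sketch (purely illustrative, with made-up values and a simple 2.2 gamma rather than anything camera-specific) of the arithmetic I am trying to understand: averaging two sensor values directly in the linear domain versus averaging their gamma-encoded versions and decoding the result.

```python
# Two hypothetical linear sensor values, normalised to 0..1
# (think of 12-bit raw values divided by 4095).
a_lin, b_lin = 0.10, 0.90

# Interpolation (here just a simple average) done in the linear domain.
avg_linear = (a_lin + b_lin) / 2.0             # -> 0.50

# The same average done after a 2.2 gamma encode, then decoded back.
gamma = 2.2
a_enc, b_enc = a_lin ** (1 / gamma), b_lin ** (1 / gamma)
avg_gamma = ((a_enc + b_enc) / 2.0) ** gamma   # -> ~0.39

print(f"average in linear domain: {avg_linear:.3f}")
print(f"average in gamma domain : {avg_gamma:.3f}")
```

Is this difference in the result (rather than the remapping back to a discrete set of levels) the point people are making when they say raw data should be manipulated in the linear domain?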

Another question: I have noticed that some images (e.g. ARW files from the Sony A7) are reported as having top_margin=0 and left_margin=0, but width does not equal raw_width. Is this just a case of the masked pixels being on the *right* of the image (in my experience they are usually on the left)? The DNG SDK reports the width as the full raw_width, and on examination the DNG-converted images fill this extra image area with repeated values of the last pixel in each row, i.e. the last pixel of the active width is duplicated to fill the area between width and raw_width.
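
For reference, this is a minimal numpy sketch of how I am interpreting the layout, using made-up dimensions (the real sizes come from LibRaw's top_margin/left_margin/width/raw_width fields, so please treat the numbers below as hypothetical):

```python
import numpy as np

# Hypothetical geometry of the kind I am describing: no margins,
# but the active width is smaller than the full raw width.
raw_height, raw_width = 4024, 6048
width = 6000
top_margin, left_margin = 0, 0

raw = np.random.randint(0, 4096, (raw_height, raw_width), dtype=np.uint16)

# With left_margin = 0 the active area starts at column 0, so the
# columns from width to raw_width (the masked pixels?) end up on the right.
active = raw[:, left_margin:left_margin + width]

# What the DNG-converted files appear to do instead: keep the full
# raw_width and pad the right edge by repeating each row's last active pixel.
padded = np.concatenate(
    [active, np.repeat(active[:, -1:], raw_width - width, axis=1)],
    axis=1,
)
assert padded.shape == (raw_height, raw_width)
```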

Any comments would be greatly appreciated!