Hi all, I'm rather new to using libraw, so I apologize if this question is somewhat naive. I'm trying to build a set of tools for handling DSLR captures of film negatives, and I'm running some tests to verify my guess that performing several steps of the negative inversion process on the raw sensor data, rather than on interpolated data that has already been converted to a color space, will provide some advantages. For this test I'm using a RAW capture of a color film negative from an A7r IV that I've converted to a DNG.
My first step is a simple attempt to invert the color data in a RAW capture of a film negative. I did this by determining the black level and maximum value of the sensor data in raw_image, then iterating over the data and setting
ImageProcessor.imgdata.rawdata.raw_image[i] = ImageProcessor.imgdata.color.maximum - ImageProcessor.imgdata.rawdata.raw_image[i];. I figured I could subtract every value in the array from the maximum to get the inverted image. I then ran
dcraw_ppm_tiff_writer() to generate a TIFF.
The approach does in fact generate an image, but the image in question is almost entirely magenta. I feel like I'm missing something very fundamental about what's going on here, and I'm hoping someone can provide me with some guidance. Thanks!