It *appears* that LibRaw's normal processing sequence converts the mosaicked image pixel data into an intermediate (YUV?) representation, which may be manipulated for noise reduction, etc.; that in turn is converted into a picture-ready RGB (which apparently differs from the camera's native RGB, or camera CMY); and that RGB is then gamma-adjusted and formatted appropriately for the output file format.
• Is this approximately correct? Is anything important missing?
• My application is a trial demosaicking / noise-reduction approach. It appears that converting the raw image to YUV might lose some information, and in any case it is an unnecessary complication for my purposes. I *have thought* that I could fill out all three colors in “camera RGB” and then matrix that directly to “picture-ready RGB.” What am I missing?
• Using Mathematica, which reportedly uses LibRaw, I have been able to reverse-engineer both the color matrix and the gamma function to a very close approximation. The exception is darker areas of the picture, where one or more of the matrix results go negative in my reverse-engineered version. Reading the LibRaw code is tedious to my non-C++-schooled eyes, and I cannot find the logic that processes CameraRGB —> YUV(?) —> PictureRGB —> GammafiedRGB, so I cannot see how LibRaw (which *mostly* works better than my reverse-engineered version) actually gets the correct coefficients. An annotated data flow, perhaps with references to files/lines, would be wonderful.
• I'd be happy to engage a LibRaw aficionado to streamline these functions, which are obviously necessary but a distraction from my demosaicking work. Is this the place to find such help, whether already existing or on demand?