I have an image storage problem, and what I'm really asking here is which approach to take, so I first need to explain the problem. Please bear with me.
So, when I shoot, I almost always take 2-4 shots in burst mode. Technically failed frames are dropped, and for static scenes the remaining ones are often combined, either via HDR merging (from different exposure values) or ALE (from identical exposure values).
Now, I want to keep the original raw data in case I want to reprocess the image later (say, after noticing some problem, or once more advanced noise filtering algorithms become available). This leaves me with several raw images that look almost identical to the human eye but are bitwise very different, so three raw files end up taking three times as much disk space as a single shot.
The idea I have is to write a tool that takes the ALE (Anti-Lamenessing Engine) combined version and represents each raw frame as a difference against it. ALE is very, very good at aligning images and can export its alignment data, so to reproduce a "raw" frame I would apply the recorded transformations to the combined frame (in reverse) and add the difference frame, ending up with a bit-for-bit identical copy of the original raw frame.
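To make the round trip concrete, here is a minimal sketch of the encode/decode symmetry I have in mind. A plain integer translation stands in for ALE's alignment data (real ALE alignment uses projective warps, and all function names here are mine, not ALE's or libraw's):

```python
import zlib
import numpy as np

def encode_frame(raw_frame, combined, shift):
    """Store a raw frame as a compressed residual against the combined
    reference. `shift` is a toy stand-in for ALE's alignment data."""
    # Align the combined frame so it overlays this raw frame.
    predicted = np.roll(combined, shift, axis=(0, 1))
    # int32 residual so negative differences survive; raw data is uint16.
    diff = raw_frame.astype(np.int32) - predicted.astype(np.int32)
    return zlib.compress(diff.tobytes())

def decode_frame(blob, combined, shift, shape):
    """Invert encode_frame: re-predict from the combined frame and add
    the stored residual back, recovering the frame exactly."""
    predicted = np.roll(combined, shift, axis=(0, 1)).astype(np.int32)
    diff = np.frombuffer(zlib.decompress(blob), dtype=np.int32).reshape(shape)
    return (predicted + diff).astype(np.uint16)
```

As long as decoding applies exactly the same prediction as encoding, the residual addition is exact integer arithmetic, so the recovered frame matches the original bit for bit.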
Now, the problem is that I don't have a tool to _write_ Canon raw files. If libraw did any noise filtering, dead pixel correction, etc. to the image, I need to be able to run those steps again on the "raw" files I create myself when uncompressing, not to mention interpolating the pixel components that were never actually measured. I see two ways to do this: 1) write a program that can replicate the raw format used by my cameras, or 2) have a way in libraw to hand it a TIFF/PNG image in which each pixel carries only one color component (let's assume a Bayer mosaic here) and have it process that image as if it had just been read from the camera's proprietary binary format.
So, the question is: which of these approaches do you see as more viable?
And yes, if the input images are very noisy, it is entirely possible that this method does not really compress anything, or that the difference frames even end up larger than the originals. Noise compresses very badly, but I'm assuming that most images have much of their area covered by signal that is stronger than the noise.
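That assumption is easy to check on synthetic data. The sketch below (plain numpy + zlib, all names mine) compares how well the residual compresses when two frames share most of their signal versus when they differ by heavy independent noise:

```python
import zlib
import numpy as np

def compressed_diff_size(reference, frame):
    """Bytes needed for the zlib-compressed residual of frame vs reference."""
    diff = frame.astype(np.int32) - reference.astype(np.int32)
    return len(zlib.compress(diff.astype(np.int16).tobytes()))

rng = np.random.default_rng(1)
# A synthetic 12-bit "scene" that both frames share.
scene = rng.integers(0, 4096, size=(256, 256), dtype=np.uint16)

# Second frame differing by mild sensor noise vs heavy noise.
mild = (scene.astype(np.int32) + rng.integers(-4, 5, scene.shape)).clip(0, 4095).astype(np.uint16)
heavy = (scene.astype(np.int32) + rng.integers(-512, 513, scene.shape)).clip(0, 4095).astype(np.uint16)

baseline = len(zlib.compress(scene.tobytes()))   # compressing a full frame directly
mild_size = compressed_diff_size(scene, mild)    # residual, low noise
heavy_size = compressed_diff_size(scene, heavy)  # residual, high noise
```

With low noise the residual compresses far below the full frame; as the noise grows, the residual's compressed size climbs toward (and on a very noisy sensor can exceed) that of the frame itself, which is exactly the failure mode described above.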