
How to use LibRaw for writing one's own demosaicking algorithm

Dear All, thank you very much for this really nice piece of open-source software!

I would like to write an inverse problem solver that takes multiple RAW files (shots of the same scene) as input and outputs three RGB files for the image.
The way demosaicking is performed is up to me, but I would like to rely on LibRaw to consistently know the mapping between each pixel coordinate (x_i, y_i) and its color channel (R, G, or B), whether the input RAW comes from Canon, Nikon, Sony, Pentax, etc.
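
For illustration, here is the kind of lookup I have in mind (an untested sketch; I am only guessing that COLOR(row, col) and imgdata.idata.cdesc are the right entry points for this):

#include <libraw/libraw.h>
#include <cstdio>

int main(int argc, char **argv)
{
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s file.raw\n", argv[0]);
        return 1;
    }

    LibRaw proc;
    if (proc.open_file(argv[1]) != LIBRAW_SUCCESS || proc.unpack() != LIBRAW_SUCCESS)
        return 1;

    // cdesc names the channels by index, e.g. "RGBG"
    const char *cdesc = proc.imgdata.idata.cdesc;

    // Print which channel sits at each position of the top-left 4x4 block
    // (I am not sure whether these coordinates are relative to the visible
    // area or to the full raw frame including margins).
    for (int row = 0; row < 4; ++row) {
        for (int col = 0; col < 4; ++col)
            std::printf("%c ", cdesc[proc.COLOR(row, col)]);
        std::printf("\n");
    }

    proc.recycle();
    return 0;
}

If something like this is valid, I could compute the mapping once per camera model and reuse it for all the RAW inputs of a scene.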

I would like to use the C++ interface of LibRaw, but unfortunately I cannot find my use case in the documentation.

Can someone help me, or point me to an example of a simple way to map coordinates to color channels?
It would be even better if I could derive C++ mapping code that can run on a GPU, where I would need a switch like this:

switch (bayer_pattern) {
case pattern::Bayer0:
    // e.g. RGGB: (x_i + y_i) % 2 == 1 is the green channel,
    // x_i % 2 == 0 && y_i % 2 == 0 is the red channel,
    // x_i % 2 == 1 && y_i % 2 == 1 is the blue channel
    break;
case pattern::Bayer1:
    // other pattern known at compile time [...]
    break;
}
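
To make the connection with LibRaw, I imagine classifying the 2x2 layout once on the CPU and then dispatching to the matching compile-time case on the GPU. Here is a rough, untested sketch of that classification (CfaPattern and the RGGB/BGGR/... names are my own invention; only COLOR() and cdesc come from LibRaw):

#include <libraw/libraw.h>
#include <string>

enum class CfaPattern { RGGB, BGGR, GRBG, GBRG, Unknown };

CfaPattern classify_cfa(LibRaw &proc)
{
    const char *cdesc = proc.imgdata.idata.cdesc; // e.g. "RGBG"
    std::string quad;

    // Collect the channel letters of the top-left 2x2 block,
    // row-major: (0,0) (0,1) (1,0) (1,1)
    for (int row = 0; row < 2; ++row)
        for (int col = 0; col < 2; ++col)
            quad += cdesc[proc.COLOR(row, col)];

    if (quad == "RGGB") return CfaPattern::RGGB;
    if (quad == "BGGR") return CfaPattern::BGGR;
    if (quad == "GRBG") return CfaPattern::GRBG;
    if (quad == "GBRG") return CfaPattern::GBRG;
    return CfaPattern::Unknown; // e.g. X-Trans or another non-Bayer sensor
}

Is this a reasonable way to use the API, or is there a more direct mechanism I have missed?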

Thank you very much in advance for your help.
