Recent comments

Reply to: What to do with rgb_cam matrix   4 years 11 months ago

1.0 is after normalization, so if you read my reply above you should be asking yourself, "what is the correct denominator for normalization?"
How to do it right? There is a lot of open source code available, including in LibRaw. Have you looked at the code in our samples folder? There are also books and papers available.

Reply to: What to do with rgb_cam matrix   4 years 11 months ago

> Why are you normalizing to 1 and after that ruining normalization with multiplication?
As opposed to doing what? It's hard for me to explain what I'm doing wrong if no one explains how to do things right. The colour space matrix multiplication is kind of an afterthought to the way I'm doing this; what I do makes more sense if you exclude it. Every stage of my processing is meant to be visible, with a fairly consistent approach to gain, so naturally it starts with subtracting black levels and normalisation. I divide the WB coefs from cam_mul by their minimum (green, because I don't understand why the green coef should ever be less than 1.0), so that green stays normalised while red and blue can go above 1.0. Those are clipped on screen (but not in the data), so at that stage highlights are white on screen but purple in the data.

Is it after the white balance multiplication that I should clip the data to 1.0? I don't like that it means throwing data away, but I guess that makes sense if I'm not going to do highlight recovery, and the matrix multiplication will make the higher red and blue values push green down. So I guess I should clip, do the colour space conversion, then normalise again by dividing everything by the lowest value from the result of (1, 1, 1) × the matrix (and then clip again?).

Reply to: What to do with rgb_cam matrix   4 years 11 months ago

Why are you normalizing to 1 and then ruining the normalization with multiplication? Not to mention that your justification for multiplying the way you do is not convincing.
If you have green at clipping already in the raw data, and white balance promotes the red and blue channels, you get minus green, which is magenta / purple: magentish highlights. So the question is: what clipping value to use? Suppose you have a 14-bit raw, and the green channel clips at a value of 14000 (that's before subtracting black).

Reply to: What to do with rgb_cam matrix   4 years 11 months ago

"You are multiplying _after_ normalization"
I'm not sure what you mean and I don't understand the relevance, since normalisation doesn't do much besides scaling values.

"you are not applying proper clipping"
Well, what is proper clipping? I thought I was supposed to do everything and then clip at the very end, but clearly that won't solve the dark purple highlights. Am I instead supposed to boost the values of maxed-out pixels?

Reply to: What to do with rgb_cam matrix   4 years 11 months ago

2) It's a pretty sensible step if you interpret the Bayer image from a signal-processing point of view. The red and blue channels each have 1 pixel set out of every 4, so if you're going to use any kind of low-pass filtering, then to preserve the energy you need to multiply everything by 4 (or 2 for green). That creates a valid intermediary representation in which the average pixel value of the whole image is the same before and after a low-pass-filter-based demosaic (blurring, halving or interpolating), because each set pixel has to be averaged with 3 black pixels. I wouldn't call that white balance at all: the image still comes out mostly green, and applying the white balance coefs makes it look right, albeit desaturated if I don't do the colour space conversion.

1 & 3) So you are saying that white balance should be done before the matrix multiplication? This perplexes me, because white balance correction in an editor is usually done by the user towards the end of the processing chain: the editor gives you a debayered, colour-space-converted, defringed, rectilinearised image and applies its white balance, and if you don't like it you can apply your own additional white balance, which is a simple RGB multiplication (or so it seems, since WB adjustment appears to be a lightweight operation). But to be consistent, any WB change by the user would have to be done before even debayering, and everything else would have to be redone for each user adjustment of WB. It sounds like that's what should correctly be done, right?

The reason I made this thread is that I first tried doing WB correction followed by the matrix multiplication, but in some cases it gives me rather dark purple highlights, so I thought there must be something wrong.

Here's my processing with WB followed by matrix multiplication: https://i.imgur.com/F3FjABw.png
Here's the libraw_dcraw_process() output with use_camera_wb set to 1 and highlight set to 0 (clip, so that means no reconstruction, right?): https://i.imgur.com/Y2uxhWl.png

If all 3 channels are maxed out, WB correction boosts red and blue, and the matrix multiplication essentially increases the saturation, then I guess it makes sense that it turns out that way; but clearly I'm missing something rather important if I'm to avoid purple street lamps.

Reply to: What to do with rgb_cam matrix   4 years 11 months ago

You are multiplying _after_ normalization, and you are using WB coefficients that are valid before your multiplication.
You have purple highlights because you are not applying proper clipping. To operate in the energy domain you need a better picture of the spectral response, before and after demosaicking.

Reply to: What to do with rgb_cam matrix   4 years 11 months ago

1) White balance should be applied before interpolation (half and bilinear demosaics can handle non-balanced data, but the others cannot).
2) I could not understand this step:
"Multiply each pixel by its bayer filter colour times 2 for green and times 4 for red/blue"
Is it a 'poor man's approximate white balance'? If so, the WB values should be adjusted to 0.5/0.25 :)

3) rgb_cam is the 'camera to sRGB' matrix. If your output space is not sRGB, the color profile matrix (rgb_cam) should be adjusted to the output space (see the convert_to_rgb() source for an example).

Reply to: Fujitsu Super-CCD files not processed "correctly"   4 years 11 months ago

I could well be barking up the wrong tree, but doesn't the code handle merging of 4-shot Pentax files, which is really quite a similar concept (afaict)?

Luckily, I'm not too bothered by this (though one of my users is nagging me about it).

Reply to: Fujitsu Super-CCD files not processed "correctly"   4 years 11 months ago

Yes, use shot_select parameter to extract second frame.

Correct processing/merging/etc. is the task of your code. LibRaw::dcraw_process() does not do merging.

Reply to: LibRaw 0.20 supported cameras   4 years 11 months ago

Do you have a schedule for this?

Reply to: LibRaw 0.20 supported cameras   4 years 11 months ago

yes

Reply to: LibRaw 0.20 supported cameras   4 years 11 months ago

Is it planned to support the Canon EOS R?

Reply to: Processing Fuji raw_image   4 years 11 months ago

Is there a getter for libraw_internal_data.unpacker_data.fuji_layout or will I need to sub-class to get at that?

I found is_fuji_rotated() which returns libraw_internal_data.internal_output_params.fuji_width

Thanks

Reply to: Processing Fuji raw_image   4 years 11 months ago

Yes, this is the right piece of code.

Reply to: Processing Fuji raw_image   4 years 11 months ago

I couldn't locate the exact version I'm using on GitHub (I'm using version 18.8). The code in question reads as follows:

    // Move saved bitmap to imgdata.image
    if( imgdata.idata.filters || P1.colors == 1)
      {
        if (IO.fuji_width) {
          unsigned r,c;
          int row,col;
          for (row=0; row < S.raw_height-S.top_margin*2; row++) {
            for (col=0; col < IO.fuji_width << !libraw_internal_data.unpacker_data.fuji_layout; col++) {
              if (libraw_internal_data.unpacker_data.fuji_layout) {
                r = IO.fuji_width - 1 - col + (row >> 1);
                c = col + ((row+1) >> 1);
              } else {
                r = IO.fuji_width - 1 + row - (col >> 1);
                c = row + ((col+1) >> 1);
              }
              if (r < S.height && c < S.width)
                imgdata.image[((r)>>IO.shrink)*S.iwidth+((c)>>IO.shrink)][FC(r,c)]
                  = imgdata.rawdata.raw_image[(row+S.top_margin)*S.raw_pitch/2+(col+S.left_margin)];
            }
          }
        }
Reply to: Processing Fuji raw_image   4 years 11 months ago

We do not have any plans to drop SuperCCD support:
- raw data is extracted as is (in two subframes), keeping sensor aspect ratio
- processing is adapted from dcraw

Also, we do not have any plans to improve processing part, our goal is raw data and metadata extraction.

Reply to: Processing Fuji raw_image   4 years 11 months ago

I do not know which exact version you use. Could you please use GitHub URLs with #L[lineno] markers to point to the exact version and exact line.

Reply to: Processing Fuji raw_image   4 years 11 months ago

Just to be sure, can you confirm that the code at line 2783 in libraw_cxx.cpp is the relevant code?

Thanks

Reply to: Processing Fuji raw_image   4 years 11 months ago

Please don't drop Fujitsu Super-CCD support; if that's done, people who use my software won't be able to reprocess old images (which the astrophotography folks often do).

I'll take a look at raw2image_ex() to see if I can understand it.

Dave

Reply to: Processing Fuji raw_image   4 years 11 months ago

Yes, Fuji Super-CCD is completely different.
Look into raw2image_ex() source for details.

BTW, today, in 2019, it is a good enough idea to drop Super-CCD support.

Reply to: color.maximum and camera white level   4 years 11 months ago

AFAIK, this is a vendor-specified value for the white point.

Reply to: color.maximum and camera white level   4 years 11 months ago

It's great that you get it from the CR2 file, but you didn't answer my question: does it represent the "white level" or not? And, as an aside, is it normal to see pixel values greater than that level?

Reply to: color.maximum and camera white level   4 years 11 months ago

For Canon cameras, imgdata.color.maximum is set from metadata provided by the vendor in the CR2 file.

Not doing so results in the 'pink highlights' problem.
