Recent comments

Reply to: Sony A7 II and LibRaw 0.19   4 years 6 months ago

> by the time you have completed the unpack() operation, does the LibRaw code know if the camera is/isn't supported

The philosophy is to try to support those cameras that are "not listed" too, even if the support is limited: in many cases a file coming from a non-listed camera can still be unpacked. IsSupported implies a yes-or-no answer. "Grey, dear friend, is all theory, and green the golden tree of life." (Goethe)

Reply to: Sony A7 II and LibRaw 0.19   4 years 6 months ago

Many thanks - probably better to use "Sony ILCE-7M2 / A7 II"; no point in repeating the "Sony" word!

A thought just occurred to me - by the time you have completed the unpack() operation, does the LibRaw code know whether the camera is/isn't supported? If it does know that, it would be simple to add a new method:

bool LibRaw::isSupported()

that a client application can call after calling unpack() ...

Thanks again
David

Reply to: Sony A7 II and LibRaw 0.19   4 years 6 months ago

Dear Sir:

Checking camera support against the list of names isn't going to provide a definitive "not supported" answer, because not all camera aliases are known (for example, the Leica C-Lux can have CAM-DC25 as its name in EXIF). We add such aliases as we discover them.
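
If an application wants the heuristic anyway, the name check can be sketched roughly like this (illustrative only, built on the public cameraList()/cameraCount() calls; a miss here does not mean "unsupported"):

    // Heuristic sketch only: compare "Make Model" from EXIF identification with the
    // entries returned by LibRaw::cameraList(). A miss does NOT mean "unsupported" --
    // aliases and brand-new models are exactly the problem described above.
    #include <libraw/libraw.h>
    #include <cctype>
    #include <string>

    static bool iequal(const char *a, const char *b)   // local case-insensitive compare
    {
        for (; *a && *b; ++a, ++b)
            if (std::tolower((unsigned char)*a) != std::tolower((unsigned char)*b))
                return false;
        return *a == *b;
    }

    static bool appears_in_camera_list(const LibRaw &proc)
    {
        std::string id = std::string(proc.imgdata.idata.make) + " " + proc.imgdata.idata.model;
        const char **list = LibRaw::cameraList();
        for (int i = 0; i < LibRaw::cameraCount(); ++i)
            if (list[i] && iequal(list[i], id.c_str()))
                return true;
        return false;
    }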

Unfortunately, we don't have a better solution at the moment, and it's not easy to come up with one.

You are absolutely right re Sony; we will take care of this. Thank you for your suggestion.

Reply to: Canon CR3 Support   4 years 6 months ago

To be included in the next snapshot.

I answered your e-mail message about the GFX100 on Fri, 6 Sep 2019.

Reply to: DNG image with YCbCr photometric   4 years 6 months ago

Please check your e-mail.

Reply to: Canon CR3 Support   4 years 6 months ago

Hello, is there any news about the Fuji GFX100?
(I guess my e-mails went to spam, so I'm duplicating my questions here.)

Reply to: Possible bug with Panasonic G9 hi-res raw files   4 years 6 months ago

Thank you for the sample files (downloaded; you may remove them to free up OneDrive space).

1st: we do not see a green tint in our software that uses LibRaw for raw decoding and/or rendering (FastRawViewer, RawDigger). We do not know which specific version(s) of LibRaw are used in your apps, or how the app vendors use it (only for raw decoding, or also for rendering), so it is hard to help with this specific case.

2nd: Garbage at the right edge of the high-res image is confirmed. We will adjust the size of the sensor's technological area to be cropped out in a future LibRaw update (coming soon).

Thank you again for the detailed problem report and for the RAWs.

Reply to: MSVC 2017 & auto_ptr() / Build System   4 years 6 months ago

Follow-up: the issue is solved in the current public snapshot via #if __cplusplus >= 201103L || (defined(_CPPLIB_VER) && _CPPLIB_VER >= 520) and/or the LIBRAW_USE_AUTOPTR define.
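
Roughly, the selection works like this (an illustrative sketch of the idea, not the verbatim LibRaw header):

    // Illustrative sketch of the selection described above (not the verbatim header):
    #include <fstream>
    #include <memory>

    #if defined(LIBRAW_USE_AUTOPTR) || !(__cplusplus >= 201103L || (defined(_CPPLIB_VER) && _CPPLIB_VER >= 520))
    typedef std::auto_ptr<std::filebuf>   filebuf_ptr;  // legacy path (or forced via LIBRAW_USE_AUTOPTR)
    #else
    typedef std::unique_ptr<std::filebuf> filebuf_ptr;  // C++11 path
    #endif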

Reply to: MSVC 2017 & auto_ptr() / Build System   4 years 6 months ago

What is the minimal compiler/libstdc++/libc++ for that?

Will it compile on Mac OS X 10.6 using Xcode 4.3 (for example)?

Reply to: MSVC 2017 & auto_ptr() / Build System   4 years 6 months ago

If you are willing to build it yourself, the edits to make it use std::unique_ptr instead are trivial. Just replace the two std::auto_ptr instances in libraw_datastream.h with std::unique_ptr (along with the export).

Then go into the CPP file and make sure that

a) every construction of the filebuf that formerly looked like this:

 std::auto_ptr<std::filebuf> buf(new std::filebuf());

now looks like this:

auto buf = std::make_unique<std::filebuf>();

b) every assignment of what used to be an auto_ptr is wrapped in std::move. So, for example, this line:

f = saved_f;

changes to this:

f = std::move(saved_f);
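
Put together, the replacement pattern looks roughly like this (a sketch; std::make_unique assumes a C++14 compiler):

    // Consolidated illustration of the edits described above.
    #include <fstream>
    #include <memory>

    int main()
    {
        // was: std::auto_ptr<std::filebuf> buf(new std::filebuf());
        auto buf = std::make_unique<std::filebuf>();

        // was: f = saved_f;  (unique_ptr is move-only, so transfers need std::move)
        std::unique_ptr<std::filebuf> f;
        f = std::move(buf);
        return 0;
    }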

Reply to: How to compile LibRaw with RawSpeed?   4 years 6 months ago

Didn't even start.

Reply to: How to compile LibRaw with RawSpeed?   4 years 6 months ago

It seems this issue 100 has been closed. Is there any progress on the new rawspeed lib?

Reply to: unpack() performance?   4 years 7 months ago

Looks like you have some runtime checks turned on (uninitialized variables, array bounds checks, etc.).

Also, Visual Studio's debug heap slows down every program with large memory allocations (when run inside Visual Studio).
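
One way to take the IDE out of the measurement is to time unpack() by itself (a minimal std::chrono sketch, not an official LibRaw sample):

    // Sketch: time unpack() alone, independent of any debugger/IDE overhead.
    #include <libraw/libraw.h>
    #include <chrono>
    #include <cstdio>

    int main(int argc, char **argv)
    {
        if (argc < 2) return 1;
        LibRaw proc;
        if (proc.open_file(argv[1]) != LIBRAW_SUCCESS) return 1;
        auto t0 = std::chrono::steady_clock::now();
        int rc = proc.unpack();
        auto t1 = std::chrono::steady_clock::now();
        std::printf("unpack(): rc=%d, %.3f s\n", rc,
                    std::chrono::duration<double>(t1 - t0).count());
        return 0;
    }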

Reply to: unpack() performance?   4 years 7 months ago

Problem solved.

I think it was due to my C++/CLR project. When tested from Visual Studio in Release, unpack time was 11 s.
When tested outside of Visual Studio, the time was 2.6 s.
Then I tried using libraw.dll instead of embedding the code directly in my project, and unpack time was 1.1 s, the same timing as the postprocessing_benchmark project.

I still don't know why the dll is faster than compiling the code into my project, but that's fine :)

Thanks again for confirming that I had a problem!

Reply to: Order of operation and correct level-adjustment   4 years 7 months ago

The "has been subtracted at the unpacking stage" statement covers "possibility": if black pattern is not covered by black/cblack[], the only way is to subtract it on unpack to prevent LibRaw user from camera-specific details.

From user's point, there is no difference between 'black subtracted by camera' (e.g. old Nikons) and 'black subtracted on unpack()'
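
From the caller's side that boils down to trusting the reported levels; a minimal sketch (illustrative only, using the documented black/cblack fields):

    // Sketch: only the reported levels matter to the caller. If black and cblack[0..3]
    // are all zero after open/unpack, there is nothing left to subtract -- whether the
    // camera or unpack() already did it is irrelevant.
    #include <libraw/libraw.h>
    #include <cstdio>

    int main(int argc, char **argv)
    {
        if (argc < 2) return 1;
        LibRaw proc;
        if (proc.open_file(argv[1]) != LIBRAW_SUCCESS || proc.unpack() != LIBRAW_SUCCESS)
            return 1;
        const libraw_colordata_t &C = proc.imgdata.color;
        std::printf("base black: %u, per-channel: %u %u %u %u\n",
                    C.black, C.cblack[0], C.cblack[1], C.cblack[2], C.cblack[3]);
        return 0;
    }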

Reply to: Order of operation and correct level-adjustment   4 years 7 months ago

hi,

no need to shout at me; I was documenting my progress here in the hope that it helps others. Maybe I should have kept it to myself.

> blacks are NOT subtracted by unpack().

... may I humbly refer you to your documentation, which clearly states:

Structure libraw_colordata_t: Color Information
...
unsigned black;
Black level. Depending on the camera, it may be zero (this means that black has been subtracted at the unpacking stage or by the camera itself)
...

I do not wish to use the 4-component-distribution version of the data, for several reasons (not the least of which is performance: my current, very crude reverse-engineered version is already slightly faster than the original, and that is without optimization and without converting the large loops to CUDA). That is why I was asking questions, in order to better understand what happens under the hood.
In the end I want to use libraw for reading/unpacking RAW data and do all post-processing after that step myself, to better integrate it into pipelines. Recreating the reading part would be possible, since my use cases only cover about 10 different camera models/brands and I have unlimited access to all of them, but libraw (dcraw) is doing a great job there, so why reinvent the wheel ...

Thanks for the advice you gave (pointing me back to ninedegrees)! If we were able to communicate better I would volunteer to rework the documentation; obviously, that would not be wise. Therefore: thank you for the work you invest in maintaining libraw!

Mact

Reply to: Order of operation and correct level-adjustment   4 years 7 months ago

"... - call "subtract_black()" (just in case black has not been subtracted by "unpack")..."

blacks are NOT subtracted by unpack().
If you wish to use LibRaw's imgdata.image[] array (not imgdata.rawdata.*), you also need to call raw2image() or raw2image_ex(). The second one can subtract black if called with a (true) argument (and if the other parameters you use do not prevent black subtraction on copy).
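
A minimal sketch of that imgdata.image[] path (illustrative; error handling reduced to return codes):

    // Sketch: unpack(), then copy into imgdata.image[] with black subtraction
    // requested on the copy (when parameters allow it).
    #include <libraw/libraw.h>

    int load_into_image(LibRaw &proc, const char *path)
    {
        int rc = proc.open_file(path);
        if (rc != LIBRAW_SUCCESS) return rc;
        rc = proc.unpack();              // fills imgdata.rawdata.*; blacks are NOT subtracted here
        if (rc != LIBRAW_SUCCESS) return rc;
        return proc.raw2image_ex(1);     // 1 = subtract black while copying, if possible
    }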

Reply to: Order of operation and correct level-adjustment   4 years 7 months ago

So ... by not using cam_mul[] for scaling, but instead using pre_mul[] and reproducing dcraw's one-step scaling-and-white-balancing, I am getting a stable result.
Interestingly, this is not the (exact) same hue that dcraw (and libraw) produces with gamma 1/1, no auto-bright, and output "raw" (unaltered camera color) *or* output "sRGB": dcraw/libraw's output is slightly (not much, really) more on the blue side, and ever so slightly less saturated. I am not sure why that is. Maybe because I am doing most calculations in double precision, not integer? Since the difference is really small, I am fine with it.

I do have to normalize the image for dark/maximum (the black value and the maximum value from imgdata.rawdata.color) in order for dcraw's scaling to work correctly. I could not find the function in dcraw that is equivalent to this normalization step, but leaving it out gives results that are too dark, so it must be correct (I assume that scale_colors actually looks for high/low points in the image; some parts of the code are still opaque to me).
This also means that my assumption above seems to be correct: normalization is pre-demosaic, not post-demosaic.

My steps now are (see the sketch after this list):
- load file
- unpack
- call "subtract_black()" (just in case black has not been subtracted by "unpack")
- normalize to embedded dark/white values (or "minimum/maximum", dcraw's mixture of terms is not helping ...)
- scale/white-balance based on pre_mul[]
- demosaic
- apply color profile (using the pre-calculated cam-rgb-to-xyz-to-output-rgb)
- apply rotation
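
Here is the per-pixel core of the normalize and scale/white-balance steps as I understand them (a sketch only; names follow imgdata.rawdata.color, the demosaic/profile/rotation steps are omitted, and it assumes black has not already been folded in):

    // Sketch of the "normalize to dark/maximum" plus "scale/white-balance by pre_mul[]" steps.
    // value: a raw sample, ch: its CFA channel, black/maximum/pre_mul: from imgdata.rawdata.color.
    #include <algorithm>

    static double normalize_and_scale(unsigned value, int ch,
                                      unsigned black, unsigned maximum,
                                      const float pre_mul[4])
    {
        double v = (double(value) - double(black)) / double(maximum - black); // normalize to 0..1
        return std::max(0.0, v) * pre_mul[ch];                                // one-step scale + WB
    }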

BTW: I found lclevy's discussion on scaling quite helpful (the site is linked from the ninedegreesbelow website). There's a caveat about that site, though (and that MAY have misled me, I cannot remember): the code quoted there about dark-subtraction uses cblack[], NOT the "global" black value. That is because the code has populated cblack[] with "complete" black values beforehand. This might add to why my scaling was off.

Mact

Reply to: How to create ICC profile for camera   4 years 7 months ago

... you may want to look at www.color.org for fundamentals about ICC profiles (which actually come from the printing world) or, if you want to be slightly more "compatible" with the (RGB-based) additive color world, visit opencolorio.org for an alternative approach.

In general, creating a profile from a target shot is no "magic"; the problem is really about picking the "right" spot in the image. In my experience, averaging a spot's center area (after you have found the target spots by pattern recognition) does not always give the best results.

You can have a look at lprof (search on SourceForge) - either to use it as an open-source "profiler solution" or to learn from the code how to set up your own profiler.
Alternatively, you might want to consider ColorChecker (X-Rite), a free tool that creates quite good ICC profiles.

I hope this helps.

Mact

Reply to: Order of operation and correct level-adjustment   4 years 7 months ago

Yes, that phrasing is better - you could copy&paste that into the documentation; the simple addition of "correction TO BASE black" makes it clearer!

Mact

Reply to: Order of operation and correct level-adjustment   4 years 7 months ago

cblack is *per-channel correction to (base) black*.

Reply to: Order of operation and correct level-adjustment   4 years 7 months ago

Hi, Alex,

thanks!

The docs only say "can be zero" about the "black" value (which, in my case, is != 0), but not about cblack (which IS 0 for me). I think the docs could be improved by mentioning the same about the cblack values; that would have helped me :-)
The 0 is in cblack even after "unpack()".

I don't see a problem here, it's really just about minimizing headaches for users ...

An update on my progress will come; I'm now trying to better understand the code with the help of the ninedegrees article.

Mact

Reply to: Order of operation and correct level-adjustment   4 years 7 months ago

There are two black-level fields.

Quote from docs: https://www.libraw.org/docs/API-datastruct.html#libraw_colordata_t

unsigned black;
Black level. Depending on the camera, it may be zero (this means that black has been subtracted at the unpacking stage or by the camera itself), calculated at the unpacking stage, read from the RAW file, or hardcoded.

unsigned cblack[4102];
Per-channel black level correction. The first 4 values are the per-channel correction, the next two are the black level pattern block size, then come cblack[4]*cblack[5] correction values (at indexes [6 .. 6+cblack[4]*cblack[5]).

For Sony files (checked with an A7R-IV sample), black is 512 and cblack[0..3] are zero after the open_file() call.
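
Read literally, the documented layout means the effective black for a given pixel is assembled like this (a sketch of the indexing only; it assumes cblack[4] is the pattern height and cblack[5] its width):

    // Sketch of the documented cblack[] layout: base black + per-channel value +
    // an optional repeating pattern of cblack[4] x cblack[5] values starting at index 6.
    #include <libraw/libraw.h>

    static unsigned effective_black(const libraw_colordata_t &C, int row, int col, int channel)
    {
        unsigned v = C.black + C.cblack[channel & 3];
        if (C.cblack[4] && C.cblack[5])
            v += C.cblack[6 + (row % C.cblack[4]) * C.cblack[5] + (col % C.cblack[5])];
        return v;
    }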

Reply to: Order of operation and correct level-adjustment   4 years 7 months ago

Just an interim update:

libraw reports cblack values as all 0 for me after unpack()/subtract_black(). Yet, the actual data (RAW image at imgdata.rawdata.raw_image) has minimum 509 and maximum 6069 before and after subtract_black().

To me it looks like subtract_black() either does not use "imgdata.rawdata.raw_image" (since its data has not been changed) or I need to add another step between unpack() and subtract_black() to actually get the black data populated. I will give reading the code another go.

To be continued.

Mact

Reply to: Order of operation and correct level-adjustment   4 years 7 months ago

Colour is what the camera measured and recorded (less metameric error). The goal is to preserve the colour it measured and present it in a colorimetric space. One of the methods is to find and apply an optimized transform between the device data space, which isn't colorimetric, and the XYZ or Lab colour space. There are a few methods of dealing with metameric error, too. It's one of those calibration-class tasks. Next, we move to perceptual matching, and those operations do change colour.
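
For the calibration step, the optimized transform is typically a matrix M fitted over a set of target patches, e.g. in a simple least-squares formulation (a sketch, not a full calibration recipe):

    M^{*} = \arg\min_{M} \sum_{i} \lVert \mathbf{t}_i - M\,\mathbf{d}_i \rVert^2

where d_i are the device (camera) triplets for patch i and t_i the corresponding XYZ (or Lab) reference values; the perceptual matching mentioned above happens after this colorimetric step.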
