LibRaw 0.20.2 Release

LibRaw 0.20.2 (a patch release, not a new major release) has just been uploaded to the GitHub repository and to this site's download page.

LibRaw 0.20.2 changes (relative to 0.20.1)

Reverted 0.20.1 change:
    const buffer for open_buffer() and open_bayer() calls
This change is reverted because it broke the ABI established in 0.20.0.

LibRaw 0.20.1 changes (relative to 0.20.0)

Improvements:

  • exif callback is called on EXIF GPS and EXIF Interop IFDs
  • open_bayer call documented
  • Canon (ColorDataSubVer == 32): parse Specular White level instead of using a hardcoded value

Fixes for normal file processing:

  • Olympus XZ-1: do not provide linear_max (the value in the metadata is wrong)
  • Nikon Z cameras: added missing space in camera list
  • raw-identify: fixed wb-preset print
  • Pentax Optio 33WR: maker index was incorrect
  • dcraw_emu: corrected help line for -6 option.
  • raw-identify: corrected range check for color matrices print
  • use_camera_matrix option: fixed a bug introduced while silencing compiler warnings.

Fixes for processing damaged or specially crafted files:

  • Fix for truncated CR3 files parsing
  • DNG metadata merger: all color loops are limited to MIN(4,colors)
  • Check that margins are smaller than the raw image size
  • Check for xmpdata present in Samsung Lens ID assignment
  • Check for column range in leaf_hdr decoder
  • Additional checks in Hasselblad model parser
  • Fuji rotate: better limits check
  • DNG files: limit tiff_samples

Not fixes, but changes that keep ASAN, compilers, etc. happy:

  • corrected GPS EXIF output
  • const buffer for open_buffer() and open_bayer() calls

What's new and what's changed (relative to LibRaw 0.19):

Camera Format support

  • Canon CR3
  • GoPro (via GPR SDK)
  • Panasonic 14-bit
  • Fujifilm compressed/16bit
  • Raspberry Pi RAW+JPEG format (if USE_6BY9RPI is defined)
  • Foveon X3F support changed: it is supported only if USE_X3FTOOLS is defined at build time (see the 'Imported code policy disclaimer' below)

Camera support (1133 total)

  • Canon: PowerShot G5 X Mark II, G7 X Mark III, SX70 HS, EOS R, EOS RP, EOS 90D, EOS 250D, EOS M6 Mark II, EOS M50, EOS M200, EOS 1DX Mark III (lossless files only)
  • DJI Mavic Air, Air2, Osmo Action
  • FujiFilm GFX 100, X-A7, X-Pro3, X100V, X-T4 (uncompressed and lossless compressed only), X-T200
  • GoPro Fusion, HERO5, HERO6, HERO7, HERO8
  • Hasselblad L1D-20c, X1D II 50C
  • Leica D-LUX7, Q-P, Q2, V-LUX5, C-Lux / CAM-DC25, SL2, M10 Monochrom
  • Nikon D780, Z50, P950
  • Olympus TG-6, E-M5 Mark III, E-PL10, E-M1 Mark III
  • Panasonic DC-FZ1000 II, DC-G90, DC-S1, DC-S1R, DC-S1H, DC-TZ95
  • PhaseOne IQ4 150MP
  • Ricoh GR III
  • Sony A7R IV, A9 II, ILCE-6100, ILCE-6600, RX0 II, RX100 VII
  • Zenit M
  • also multiple smartphones (the tested ones are listed in LibRaw::cameraList)

Source code re-arranged

  • dcraw.c is no longer used in the generation and build processes
  • dcraw_common.cpp and libraw_cxx.cpp are split into multiple code chunks placed in separate subfolders (decoders/ for raw data decoders, metadata/ for metadata parsers, etc)
  • dcraw_common.cpp and libraw_cxx.cpp remain to preserve existing build environments (these files are now just a bunch of #include directives).
  • It is possible to build a reduced LibRaw (useful for shrinking the library's memory/code footprint):
    • without the postprocessing functions (dcraw_process() and the functions it calls)
    • without postprocessing and the LibRaw::raw2image() call (and the functions it calls)
    • See Makefile.devel.nopp and Makefile.devel.noppr2i for the lists of source files needed to build the reduced/stripped library.

Normalized make/model

There is a huge number of identical cameras sold under different names depending on the market (e.g. multiple Panasonic or Canon models), and some identical cameras are even sold under different brands (Panasonic -> Leica, Sony -> Hasselblad).

To reduce clutter, a normalization mechanism has been implemented in LibRaw:

In imgdata.idata:

  • char normalized_make[64]; - primary vendor name (e.g. Panasonic for Leica re-branded cameras)
  • char normalized_model[64]; - primary camera model name
  • unsigned maker_index; - primary vendor name in indexed form (enum LibRaw_cameramaker_index, LIBRAW_CAMERAMAKER_* constant).

These fields are always filled upon LibRaw::open_file()/open_buffer() calls.

const char* LibRaw::cameramakeridx2maker(int index): converts maker_index to normalized_make.

We recommend that you use these normalized names in a variety of data tables (color profiles, etc.) to reduce the number of duplicate entries.
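
A minimal usage sketch (not part of the release notes; error handling reduced to a bare minimum) showing how the normalized fields can be read after opening a file and used as lookup keys:

#include <libraw/libraw.h>
#include <cstdio>

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;

    LibRaw raw;
    if (raw.open_file(argv[1]) != LIBRAW_SUCCESS)
        return 1;

    /* vendor/model exactly as recorded in the file */
    printf("File:       %s / %s\n", raw.imgdata.idata.make, raw.imgdata.idata.model);

    /* normalized (primary) vendor/model plus the numeric maker index;
       use these as keys into your own color-profile or lens tables */
    printf("Normalized: %s / %s (maker index %u)\n",
           raw.imgdata.idata.normalized_make,
           raw.imgdata.idata.normalized_model,
           raw.imgdata.idata.maker_index);
    return 0;
}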

New vendor index values will be added strictly to the end of the LibRaw_cameramaker_index table, ensuring that the numbers assigned to vendors that are already known to LibRaw will not change.

DNG frame selection (and other changes)

DNG frames selection code re-worked:

  • by default all frames w/ the NewSubfileType tag equal to 0 (high-res image) are added to the list of available images (selection performed via imgdata.params.shot_select field, as usual)
  • the special case for Fuji SuperCCD (SamplesPerPixel == 2) works as before: shot_select=1 will extract the second sub-image.
  • Additional flags to imgdata.params.raw_processing_options:
    • LIBRAW_PROCESSING_DNG_ADD_ENHANCED - will add Enhanced DNG frame (NewSubfileType == 16) to the list of available frames
    • LIBRAW_PROCESSING_DNG_ADD_PREVIEWS - will add previews (NewSubfileType == 1) to the list.
  • By default, DNG frames are not reordered and are available in the same order as in the DNG (LibRaw traverses the IFD/SubIFD trees in depth-first order). To prioritize the largest image, set the LIBRAW_PROCESSING_DNG_PREFER_LARGEST_IMAGE bit in imgdata.params.raw_processing_options.
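
A minimal sketch (the DNG file name is hypothetical) of combining these options before opening a multi-frame DNG:

#include <libraw/libraw.h>

int main()
{
    LibRaw raw;

    /* also list previews, and put the largest frame first */
    raw.imgdata.params.raw_processing_options |=
        LIBRAW_PROCESSING_DNG_ADD_PREVIEWS |
        LIBRAW_PROCESSING_DNG_PREFER_LARGEST_IMAGE;

    /* select the second entry from the list of available frames */
    raw.imgdata.params.shot_select = 1;

    if (raw.open_file("multiframe.dng") == LIBRAW_SUCCESS) /* hypothetical file */
        raw.unpack();
    return 0;
}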

DNG Stage2/Stage3 processing can be performed via the DNG SDK (requested via flags in raw_processing_options).

Imported code policy disclaimer

We've changed the policy regarding 3rd party code imported into LibRaw:

We (like other authors of open-source RAW parsers) gladly import support code for various RAW formats from other projects (if the license allows it).
This is done to expand camera support.
Unfortunately, not all imported code can tolerate truncated or otherwise damaged raw files, arbitrary conditions, or arbitrary data; not all authors handle the rejection of unexpected input well.
LibRaw is now widely used in various projects, including ImageMagick, which in turn is often used on web sites to process any input images, including arbitrary data from unknown users. This opens up wide possibilities for exploiting vulnerabilities present in code borrowed from other projects. To avoid such security risks, the borrowed code is no longer compiled by default.
We are not able to support it in the general case, and the original authors refuse to add code to reject unexpected input.
Thus, if you use a camera whose support is disabled by default, you need to recompile LibRaw for your specific case.

Formats currently affected:

  • X3F (Foveon) file format.
    Code is imported from Kalpanika X3F tools: https://github.com/Kalpanika/x3f
    To turn the support on, define USE_X3FTOOLS
  • Raspberry Pi RAW+JPEG format.
    Code is imported from https://github.com/6by9/dcraw/
    To turn the support on, define USE_6BY9RPI

Format support is indicated via the LibRaw::capabilities() call with the following flags:
LIBRAW_CAPS_X3FTOOLS - Foveon support
LIBRAW_CAPS_RPI6BY9 - RPi RAW+JPEG support
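
A minimal sketch of such a runtime check:

#include <libraw/libraw.h>
#include <cstdio>

int main()
{
    unsigned caps = LibRaw::capabilities();

    /* these bits are set only if the corresponding USE_* define was active at build time */
    printf("Foveon/X3F decoder:   %s\n", (caps & LIBRAW_CAPS_X3FTOOLS) ? "yes" : "no");
    printf("RPi RAW+JPEG decoder: %s\n", (caps & LIBRAW_CAPS_RPI6BY9) ? "yes" : "no");
    return 0;
}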

GoPro .gpr format support

The GoPro format is supported via the open-source GPR SDK. See README.GoPro.txt for details.

Windows support/Windows unicode (wchar_t*) filenames support

  • LibRaw's old external WIN32 define is split into three defines to fine-tune compiler/API compatibility:
  • LIBRAW_WIN32_DLLDEFS - use to compile DLLs (__dllimport/__dllexport attributes)
  • LIBRAW_WIN32_UNICODEPATHS - indicates that the runtime has the calls/datatypes for wchar_t filenames
  • LIBRAW_WIN32_CALLS - use Win32 calls where appropriate (binary mode for files, LibRaw_windows_datastream, _snprintf instead of snprintf, etc.)

If the old WIN32 macro is defined at compile time, all three new defines are defined in libraw.h.
If not, these defines are set based on compiler version/libc++ defines.

LibRaw::open_file(wchar_t*) is always compiled in under Windows, but if LIBRAW_WIN32_UNICODEPATHS (see above) is not defined, this call will return LIBRAW_NOT_IMPLEMENTED.

Use (LibRaw::capabilities() & LIBRAW_CAPS_UNICODEPATHS) at runtime to check that this call is really implemented (or check for #ifdef LIBRAW_WIN32_UNICODEPATHS after #include <libraw.h>).
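
A minimal sketch (Windows-only; the path is hypothetical) of combining the runtime check with the wide-character open call:

#include <libraw/libraw.h>
#include <cstdio>

int main()
{
    LibRaw raw;

    if (LibRaw::capabilities() & LIBRAW_CAPS_UNICODEPATHS)
    {
        /* the runtime supports wchar_t file names */
        int rc = raw.open_file(L"C:\\photos\\example.cr2"); /* hypothetical path */
        if (rc != LIBRAW_SUCCESS)
            printf("open failed: %s\n", libraw_strerror(rc));
    }
    else
    {
        /* older runtime: fall back to narrow (char*) paths */
        printf("Unicode paths are not available in this build\n");
    }
    return 0;
}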

LibRaw datastream classes simplified

  • tempbuffer_open and subfile_open were not used, so they have been removed from LibRaw_abstract_datastream and derived classes.
  • The jpeg_src() call is implemented using the ->read() call and its own buffering (16k buffer).
  • A buffering_off() call was added. It should be used in derived classes to switch from buffered to unbuffered reads.

Minor/unsorted changes

  • new flag LIBRAW_WARN_DNGSDK_PROCESSED to indicate decoder used
  • LibRaw::open() call, max_buf_size special meaning:
    • 1 => open using bigfile_datastream
    • 2 => open using file_datastream
  • New processing flag LIBRAW_PROCESSING_PROVIDE_NONSTANDARD_WB. If set (default is not set), and when applicable, color.cam_mul[] and color.WB_Coeffs/WBCT_Coeffs will contain WB settings for a non-standard workflow. Currently only the Sony DSC-F828 is affected: its camera-recorded white balance cannot be applied directly to the raw data because the WB is for RGB while the raw data is RGBE.
  • New processing flag LIBRAW_PROCESSING_CAMERAWB_FALLBACK_TO_DAYLIGHT. If set (default is not set), LibRaw::dcraw_process() will fall back to daylight WB (excluding some very specific cases like the Canon D30); this is how LibRaw 0.19 (and older) works. If not set, LibRaw::dcraw_process() will fall back to calculated auto WB when camera WB is requested but no appropriate white balance is found in the metadata (a minimal sketch follows at the end of this list).
  • Add support for zlib during configure
  • Fixed multiple problems found by OSS-Fuzz
  • Lots of changes in imgdata.makernotes (hope someone will document it)
  • DNG SDK could be used (if enabled) to unpack multi-image DNG files.
  • DNG whitelevel calculated via BitsPerSample if not set via tags.
  • DNG: support for LinearDNG w/ BlackLevelRepeat.. pattern
  • Generic Arri camera format replaced w/ list of specific camera models in supported cameras list.
  • new samples/rawtextdump sample: allows one to dump a (small) selection of RAW data in text format.
  • samples/raw-identify:
    • +M/-M params (same as in dcraw_emu)
    • -L <file-w-filelist> parameter to get file list from a file
    • -m parameter to use mmap'ed I/O.
    • -t parameter for timing
  • samples/dcraw_emu: fixed +M handling
  • better support for Nikon Coolscan 16-bit NEF files.
  • Visual Studio project files: re-generated to .vcxproj (Visual Studio 2019), different intermediate folders for different sub-projects to allow 1-step rebuild.
  • imgdata.makernotes...cameraspecific: removed the vendor name prefix from variables.
  • Bayer images: ensure that even margins have the same COLOR() for both the active sensor area and the full sensor area.
  • raw processing flag bit LIBRAW_PROCESSING_CHECK_DNG_ILLUMINANT inverted and renamed to LIBRAW_PROCESSING_DONT_CHECK_DNG_ILLUMINANT. If not set, DNG illuminant will be checked.
  • New libraw_decoder_t flags:
    • LIBRAW_DECODER_FLATDATA - in-file data could be used as is (if byte order matches), e.g. via mmap()
    • LIBRAW_DECODER_FLAT_BG2_SWAPPED - special flag for Sony ARQ: indicates R-G-G2-B channel order in 4-color data
  • Camera-recorded image crop data is parsed into imgdata.sizes.raw_inset_crop structure:
    • ctop,cleft,cwidth,cheight - crop size.
    • aspect - LibRawImageAspects enum (3to2, 4to3, etc)
  • New define LIBRAW_NO_WINSOCK2 to not include winsock2.h on compile
  • Google changes cherry-picked (thanks to Jamie Pinheiro)
  • speedup: ppg interpolate: const loop invariant
  • Bugs fixed
    • Fixed several UBs found by OSS Fuzz
    • Fixed several problems found by other fuzzers.
    • Thumbnail size range check (CVE-2020-15503). Thanks to Jennifer Gehrke of Recurity Labs GmbH for problem report.
    • fixed possible overflows in canon and sigma makernotes parsers
    • fixed possible buffer overrun in crx (cr3) decoder
    • fixed memory leak in crx decoder (if compiled with LIBRAW_NO_CR3_MEMPOOL)
    • fixed possible overrun in Sony SRF and SR2 metadata parsers
    • Fixed a typo in longitude (a member of the parsed GPS structure); an update is required for code that uses it.
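
As referenced in the LIBRAW_PROCESSING_CAMERAWB_FALLBACK_TO_DAYLIGHT item above, a minimal sketch (the file name is hypothetical) of enabling the daylight-WB fallback before postprocessing:

#include <libraw/libraw.h>

int main()
{
    LibRaw raw;

    /* request camera WB, but fall back to daylight WB (0.19-era behaviour)
       if no suitable camera WB is found in the metadata */
    raw.imgdata.params.use_camera_wb = 1;
    raw.imgdata.params.raw_processing_options |=
        LIBRAW_PROCESSING_CAMERAWB_FALLBACK_TO_DAYLIGHT;

    if (raw.open_file("example.nef") == LIBRAW_SUCCESS) /* hypothetical file */
    {
        raw.unpack();
        raw.dcraw_process();
    }
    return 0;
}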



Comments

Hi!

Hi!
Is it possible to try Windows binaries of v0.20 beta?
Or when binaries will be available?
Thanks.

No plans for publishing

No plans for publishing binaries or source tarballs for beta.

Binaries and tarballs will be updated on 'release' status.

-- Alex Tutubalin @LibRaw LLC

Is there any schedule /

Is there any schedule / target / roadmap currently for when the final release will drop?

This depends on beta feedback

This depends on beta feedback. Hope to finish Beta-RC-Release cycle in May.

-- Alex Tutubalin @LibRaw LLC

Release date ?

Any projected date for the final release yet?

Lots of warnings building with MSVC

I'm compiling the RC2 code with VS2017, and /W3. I'm getting a lot of warnings like these:

1>c:\users\amonra\documents\github\dss\libraw\src\demosaic\dht_demosaic.cpp(176): warning C4244: 'initializing': conversion from 'double' to 'float', possible loss of data
1>c:\users\amonra\documents\github\dss\libraw\src\demosaic\dht_demosaic.cpp(747): warning C4305: '/=': truncation from 'double' to 'float'

Yes, of course I can disable these warnings, but I'd like to be sure it's OK to do that?

Thanks
David

David Partridge

Warning about apply_profile building static library

apply_profile.obj : warning LNK4221: This object file does not define any previously undefined public symbols, so it will not be used by any link operation that consumes this library

Is this to be expected?

David Partridge

Checking camera against camera_list

My old code used imgdata.idata.make and imgdata.idata.model to compare against the compiled in list of supported cameras.

Should I now change that code to use imgdata.idata.normalized_make and imgdata.idata.normalized_model ?

Thanks

David Partridge

You can try, but since this

You can try, but since this list is not intended for the use described above, we do not have an answer to this question.

-- Alex Tutubalin @LibRaw LLC

So what is the intended use

So what is the intended use of this list if not that?

David

David Partridge

Dear Sir:

Dear Sir:

1. The list is simply there to list the supported cameras, for users to know. It is guaranteed not to include all cameras that are actually supported: only those we were able to test, and only their known aliases, are listed. Cameras we were not able to test, such as camera phones, are not listed even if their output is in DNG format.

2. There are no plans to make a machine-readable list, not for make/models, not for normalized make/models.

3. The intended use of normalized make/model parameters is explained in the Changelog.txt under "== Normalized make/model =="

4. Using normalized make/model parameter while searching in the camera list may be of some help, albeit very little of it.

--
Iliah Borg

OK If that's your position, I

OK If that's your position, I guess I have to accept it (even if I don't greatly like it). However, in that case what is your proposed mechanism to determine if a given camera is supported?

The reason I use the camera list is that unless you tell me otherwise, it is the only mechanism that allows me to check whether a camera is definitely supported - if it IS in the list, it should work, and if it isn't in the list, it might work, or might fail spectacularly.

David Partridge

This is not 'position' but

This is not some 'position', this is the reality.
It is difficult to unequivocally define what "camera is supported" means.
That's why we are not trying to provide an API for this.

-- Alex Tutubalin @LibRaw LLC

Major incompatibility with 0.19

Libraw 0.20 identifies the pattern for a Canon EOS 60D as GBRG, whereas 0.19 identified it as RGGB. It seems highly probable to me that this doesn't just apply to the EOS 60D.

The image processes incorrectly as a result :(

David Partridge

I understand your pain.

I understand your pain.

As stated in Changelog:
* Bayer images: ensure that even margins have the same COLOR() for both the active sensor area and the full sensor area.

The only way to achieve this is to ensure that the left/top margins are even for Bayer images and a multiple of 6 for X-Trans images.

Please note, that margins can also change due to firmware update (if margins are read from file metadata).

So, the only way to ensure valid darks, master flats, etc. is to use the full sensor area, not the active (visible) area, for such data.

You may be pleased to know that very few cameras are affected by this change.

Also, you may roll back this change by commenting out these lines in src/utils/open.cpp: https://github.com/LibRaw/LibRaw/blob/master/src/utils/open.cpp#L618-L640

This will not protect you from margin changes due to firmware updates.

-- Alex Tutubalin @LibRaw LLC

It gets worse - some .CR2

It gets worse - some .CR2 images from the camera come back with filters set to 0x00000000, so they are interpreted as full colour images, not CFA, and I can't extract the non-deBayered image data and de-Bayer it myself.

Please could you explain why you felt the need to override the margins as you do here. I understand that this impacts the code for determining a pixel's colour depending on whether you are looking at the full sensor area as compared to the active area - but why is that a problem?

The previous code respected the margins set by the manufacturer, whereas you are changing the margins, which has a massive impact on this application: previously processed master darks, flats, etc. will no longer be compatible. This is disastrous, as users do not always retain all their original darks, etc., so they won't be able to rebuild them.

Sure I could modify open.cpp but that becomes a perpetual problem as we would need to remember to change it every time we refreshed the LibRaw code.

Please, please give a lot of thought to reverting this to previous behaviour - this really is a HUGE compatibility issue.

David

David Partridge

Could you please share CR2

Could you please share CR2 samples w/ filters equal to 0 to check them up.

In D60 (and most other old canons) case, margins are not set by manufacturer, but hardcoded in libraw source. For metadata-specified margins there is always a chance that margins will change with firmware update.

So we (again) suggest to consider to use full sensor area for flats, darks, etc.

-- Alex Tutubalin @LibRaw LLC

Not a D60 this is a 60D -

Not a D60 this is a 60D - quite different.

If I change the code to use the full sensor area for all image types, then I will face an uproar from all the users; it is just as bad as using a different margin.

I've put a sample image on DropBox

https://www.dropbox.com/s/q9jf90yfzy5fudo/_MG_9458.CR2?dl=0

Note that the images returning filters=0 are also rendered by LibRaw with incorrect colours? I note that these are not FULL size raw images, but mRaw in Canon parlance (medium size Raw).

These images also exhibit the same behaviour in 0.19.

*** UPDATE ***
I just spent 1/2 hour reading up about sRaw and mRaw - they aren't Raw at all!!! I can now see why they are reported with filters=0 and as a colour image. However the colours are still wrong!
*** END UPDATE ***

PS you still haven't explained to me why the previous behaviour needed to be changed.

David Partridge

OK I apologise for the

OK I apologise for the diversion caused by sRaw/mRaw - I hadn't realised they weren't real Raw files!

However my point that changing to use the full image area would also be incompatible is still valid.

I would suggest, if I may, that instead of changing the existing behaviour, you could continue to return the same top/left margin values and filters value that 0.19 did, on the assumption that the user WILL use the image margins as defined by the firmware. You could also provide an alternative filters value for people who want to use the whole sensor area, or provide a simple API to return a suitable value for decoding the entire image:

unsigned fullImageFilters = getFullImageFilters();

David Partridge

Further to my previous: The

Further to my previous: The CFA pattern is strictly for the image area of the chip. Everywhere else there is no "CFA" because these pixels are either optically blackened or serve no function, or are 100% white, or something like that.

I really don't understand why this major incompatibility has been forced on us; it serves no purpose, and as you haven't told us why it is *needed*, I must conclude this change has no purpose.

David Partridge

> The CFA pattern is strictly

> The CFA pattern is strictly for the image area of the chip

This is not true, esp. for (affected) Canons

-- Alex Tutubalin @LibRaw LLC

I should add at this point

I should add at this point that your suggestion that we should use the FULL FRAME for astronomical images doesn't work, as this will result in corruption of the images: the "special" parts of the frame will corrupt the stacked image when "shift and add" processing (aka dithering) is being used.

If you won't remove this incompatible change, please could I ask that at the very least you make it configurable by passing a parameter to open, or at build time by using the pre-processor for example:

#if !defined(USE_LIBRAW19_MARGINS)

You still haven't explained why you made this change - it doesn't seem to provide great benefit and really messes things up for many existing users.

David Partridge

LibRaw/src/utils/open.cpp | 4

 LibRaw/src/utils/open.cpp | 4 ++++
 1 file changed, 4 insertions(+)
 
diff --git a/LibRaw/src/utils/open.cpp b/LibRaw/src/utils/open.cpp
index 79e99ae..a4798d2 100644
--- a/LibRaw/src/utils/open.cpp
+++ b/LibRaw/src/utils/open.cpp
@@ -593,6 +593,7 @@ int LibRaw::open_datastream(LibRaw_abstract_datastream *stream)
 			  else
 				  parse_fuji_compressed_header();
 		  }
+#if !defined(USE_LIBRAW19_MARGINS)
 		  if (imgdata.idata.filters == 9)
 		  {
 			  // Adjust top/left margins for X-Trans
@@ -614,7 +615,9 @@ int LibRaw::open_datastream(LibRaw_abstract_datastream *stream)
 						  imgdata.idata.xtrans[c1][c2] = imgdata.idata.xtrans_abs[c1][c2];
 			  }
 		  }
+#endif
 	  }
+#if !defined(USE_LIBRAW19_MARGINS)
 	  if (!libraw_internal_data.internal_output_params.fuji_width
 		  && imgdata.idata.filters >= 1000
 		  && ((imgdata.sizes.top_margin % 2) || (imgdata.sizes.left_margin % 2)))
@@ -638,6 +641,7 @@ int LibRaw::open_datastream(LibRaw_abstract_datastream *stream)
 			  filt |= FC((c >> 1) + (crop[1]), (c & 1) + (crop[0])) << c * 2;
 		  imgdata.idata.filters = filt;
 	  }
+#endif
 
 #ifdef USE_DNGSDK
 	  if (

David Partridge

This change will break

This change will break accurate per-channel black level calculation for (affected) Canon cameras.
It may not be your case (if your code uses its own blacks calculated from raw data), but it will affect other users and may result in excessive banding. So, no plans to implement this #ifdef in the main LibRaw source code.

Taking into account the fact that margins may change with firmware update, it is better to change your application to handle this feature accurately.

-- Alex Tutubalin @LibRaw LLC

OK - Now you've told me WHY

OK - Now you've told me WHY you did it - I can see why you don't want to do that, and why I shouldn't do it either.

I'll advise my users of the incompatibility issues. Do you happen to have a list of the cameras that this change affects?

Apologies if this has delayed the final release of 0.20.

David

David Partridge

Followup: margins/visible

Followup: the margins/visible area *may* change in the future (without notice) when/if we decide to switch from hardcoded margins to metadata (camera)-specified ones (for a specific vendor or a camera subset).

So we strongly suggest handling this in your app.

-- Alex Tutubalin @LibRaw LLC

Alex,

Alex,

I've been thinking about this issue at some length and I am trying to get my head around your assertion that the change in question is necessary to ensure accurate per-channel black level calculation. Surely if you apply the per-channel calculation to the "image" area using the correct filter mask for that area, then it will all be correct? I can see that if you applied the "image" area filter mask to the whole RAW, including the frame area, then things could go awry. But you know better than to make that error.

So please explain why you believe my reasoning is incorrect.

Thank you

David Partridge

unfortunately I could not

Unfortunately, I could not understand the exact wording of the question you are asking.

-- Alex Tutubalin @LibRaw LLC

I don't understand your

I don't understand your statement that this change to the margin size (if odd) is necessary to ensure accurate per-channel black level calculation.

Surely if you apply the per-channel calculation to the "Image" area (i.e. NOT including the frame) using the CFA pattern (filter) for that area, then the calculation will be correct.

I can understand that if you used the CFA for the image area to calculate a per-channel black level against the entire sensor area, that might not work right, but I am sure you wouldn't do that.

So please explain why you believe my reasoning is incorrect.

David Partridge

It was a different statement:

It was a different statement: your proposed change (commenting out the margins/filters adjustment) will result in incorrect interpretation of black level data (if such data is created from dark frame values).

-- Alex Tutubalin @LibRaw LLC

Huh! Now I really don't

Huh! Now I really don't understand - you said that my proposed change caused a problem with the per-channel black level calibration. Or, in other words, that the margins that were used by 0.19 and below caused a problem.

I am trying to understand WHY it causes a problem - if you calculate the per channel calibration on the image area (i.e. without the frame), surely it will be correct.

If you are saying that someone else is using LibRaw and is using the full frame area for astronomical image processing calibration frames, then they are doing it wrong! And if they persuaded you to make this change to the margins, then you should revert it immediately.

David

David Partridge

Quote from the previous

Quote from the previous message:

> if such data is created from dark frame values

In other words: a change to the LibRaw::open_datastream() code is not enough if one wants to have correct per-channel BL estimation too.

-- Alex Tutubalin @LibRaw LLC

So you saying that you

So you are saying that you incorrectly calculate the black levels because it needed a bit of work to do it properly, so you made this change instead - shame on you.

You should revert this change and do the per channel black level calculation based on the image area not the full sensor area.

David Partridge

David, please stop it. You

David, please stop it. You need to get a much better grasp of the technical issues with raw, and a way better attitude.

--
Iliah Borg

Same boat

Hi David, this is Jasem from INDI. We have the same issues as well, and this could potentially affect thousands of users. I think at this point, there are two ways:

1. Fork libraw and disable these changes instead of subjecting end-users to this?
2. Add a patch to disable them in the package management scripts?

The only reasonable option

The only reasonable option you have is to adopt the new behavior because it's the correct one, and most probably it's here to stay. Nobody guarantees you that the next firmware version will not change the margins, for example because the fab was changed, and thus the order of channels in the CFA will change. Your applications shouldn't be designed to rely on mutables to begin with. Now code a converter from old flats to the new format, satisfy your thousands of users, and be done.

FWIW I (and a number of

FWIW, I (and a number of others) don't believe it is the "correct" solution; however, I do understand that from the perspective of the LibRaw developers it is the most expedient solution for general images that don't require calibration.

David Partridge

Oh, don't speak for number of

Oh, don't speak for a number of others. Myself, I'm happy the LibRaw developers switched to a correct and future-proof solution.

I have throughout this

I have throughout this exchange been polite; I quite understand that it might be a bit more work for you to calculate the per-channel black levels if the margins are odd values - but throughout you have REFUSED to explain why you made this change. Why won't you explain your reasons?

And if you think I don't understand something - then please tell me I'm wrong and WHY.

This change delivers nothing but pain for me and my users, and so far you still haven't explained why you did it. As for your suggestion that I need to learn more about raw file processing - I use LibRaw so I don't have to know everything there is to know about that.

I have interpreted your statement that having odd margins causes incorrect calculation of per-channel black level calibration to mean that it results in LibRaw incorrectly calculating the cblack array. Have I misunderstood? If not, why does this not work if you use the image CFA and image data? Or, if you use the "all black" pixel array in the margins, can't you just use the appropriate CFA that matches the full frame as compared to the one that applies to the user image area?

If you would prefer that this discussion took place offline, I'm very happy to do that.

Thanks in advance

David Partridge

> I have throughout this

> I have throughout this exchange been polite

But failed:

> You should revert this change and do...

Since you allow yourself to dictate what we should do and how we should do it, we are forced to reduce our answers to the bare minimum.

So to answer general questions like

> PS you still haven't explained to me why the previous behaviour needed to be changed.

(repeated ad nauseam) does not make sense: the change has been made, we consider it to be right, and do not see it necessary to offer any explanations or excuses, especially when the tone you've suggested for the dialogue is hardly acceptable.

You are hardly in the position to demand anything, and especially anything we strongly disagree with, while using, free of charge, the results of our labor and expertise.

-- Alex Tutubalin @LibRaw LLC

Perhaps I lost patience when

Perhaps I lost patience when you wouldn't explain in the first instance why you felt the change was necessary. If so, my apologies for letting my impatience show up in the tone of my posts. I entirely accept that you consider the change to be right, but please, why won't you explain your logic to people like me who are left to pick up the pieces after you make an incompatible change? From my perspective the conversation went a bit like this:

Me: You broke it
You: Yes but you can comment out these lines to make it like it was
Me: Oh OK how about making this configurable by an ifdef
You: No that would break things
Me: Please explain what/why
You: Silence.

"Since you allow yourself to dictate what we should do and how we should do it, we are forced to reduce our answers to the bare minimum."

I wasn't dictating what you should do, by and large; I was mostly making suggestions, such as different CFAs for the user area and the full image. But rather than engaging in a dialogue, you provided bare-minimum answers, which just might be the reason that I became frustrated and made one or two somewhat snarky remarks.

TBH if you had explained the whys and wherefores in the first instance I might well have gone away happy. Now, not so much. I just don't get why you folks won't explain stuff.

David Partridge

X3F Tools

What does the integration of X3F Tools mean for LibRaw's handling of Foveon RAW? The Kalpanika/X3F Tools project is incomplete and abandoned, with some glaring issues; the development team freely spoke of not understanding the X3F metadata. What advantages are you seeing with this development?
I use Affinity Photo for Full Spectrum and Infracolour photography, as SIGMA's own SPP is too limited regarding its toolset. I vastly prefer the LibRaw conversion (e.g. RawDigger's RGB export) over X3F Tools', and I'd hate to lose that.
P.S. X3F Tools' use of NLM denoising is unsuited to Foveon, but I presume that's for the RAW converter to implement and isn't part of LibRaw's handling of X3Fs?
Thanks.

X3F Tools are still

X3F Tools are still integrated in LibRaw. To enable them, one needs to add the USE_X3FTOOLS define while building LibRaw from source. So, nothing has changed for Affinity or other apps that use LibRaw internally.

-- Alex Tutubalin @LibRaw LLC

> the development team freely

> the development team freely spoke of not understanding the X3F metadata

As if metadata is well-understood for any vendor ;) Run
exiftool -U
and see - a lot of fields don't even have names, and if the name is known, it doesn't mean we always know how to apply the field value.

--
Iliah Borg

X3F metadata

Well, LibRaw has the best colour rendering of Merrill X3Fs outside of SPP; you appear to be the only team to correctly apply the colour matrices! Affinity Photo is fine for Full Spectrum and Infracolour, but it's useless for visible-light photos (White Balance issues). I just wish that RawDigger had a DNG export option, rather than just TIFF.
Cheers.
P.S. With SPP offering so little control, I'm searching for a viable alternative. I did experiment with doing a manual conversion using ImageJ, but that was more of a learning exercise; Foveon RAW is amazingly straightforward (and the metadata not that cryptic). It's a shame that only Iridient Developer seems to have really tried.

I download v0.20 release for

I downloaded the v0.20 release for Windows.
When I try to convert a GoPro GPR file I get the message:
"Cannot open GPBK2696.GPR: Unsupported file format or not RAW file"
Is it possible to work with this file using the prebuilt binaries, or must I compile the sources myself?

LibRaw pre-compiled binaries

LibRaw pre-compiled binaries are built without 3rd party external components (GoPro SDK, Adobe DNG SDK, JPEG library).

So, yes, you need to compile sources yourself with GoPro SDK. See README.GoPro.txt for details.

-- Alex Tutubalin @LibRaw LLC
