I tried the latest patch.
It imports the ORF file. But the colors are really terrible with the default settings. Do you suggest any special settings?
Original file: https://www.lemkesoft.org/temp/original.orf
Imported file: https://www.lemkesoft.org/temp/import_25percent.png
Just use the Adobe DNG SDK:
- it is free
- it is BSD-licensed
- it is capable of recording industry-standard RAW files compatible with ANY raw-processing software.
(the only problem is the lack of usable documentation, but it is easy to grab working sample code from GitHub :)
Actually, I am in a position similar to the OP's, so I would like to expand a bit on the subject:
- many industrial cameras can be interfaced using the GigE Vision protocol.
- this protocol is very low-level, and each frame is received with minimal metadata.
- so storing the pixel values is mostly the programmer's responsibility.
- so everyone facing such a camera ends up defining their own RAW format.
- still, it would be good if the images produced could be read by LibRaw, and by all the programs built around it.
- and it would hardly make sense to add to LibRaw a routine for each file type produced by each version of each piece of software written by each new person.
So the bottom line is:
- given the LibRaw developers' experience with RAW file formats, could they point out an existing format, already handled by LibRaw, with which newcomers could comply with minimal hassle? (As the OP pointed out, DNG seems to have been designed for exactly that, but I have no idea how hard writing DNG files is.)
ILCE-7M4 support will be published in the next public snapshot (or beta/release, if it comes first)
request a7iv (ILCE-7M4) raw support
I tried my EOS 80D's CR2 file and my Sony A330's ARW file; both work fine. But an ARW shot by the a7m4 does not work.
OM-1 supported by this latest patch: https://github.com/LibRaw/LibRaw/commit/adcb898a00746c8aa886eb06cc9f5a1c...
baseline exposure tag for DNG is parsed into imgdata.color.dng_levels.baseline_exposure (and also into tiff_ifds[ifd].dng_levels.....)
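A minimal sketch of reading that field through the public API (assuming LibRaw is installed and a DNG file path is passed on the command line; this only reads `imgdata.color.dng_levels.baseline_exposure`, not the per-IFD copies):

```cpp
#include <cstdio>
#include <libraw/libraw.h>

int main(int argc, char **argv) {
    if (argc < 2) return 1;
    LibRaw rp;
    // open_file() runs the metadata parser, which fills dng_levels for DNGs
    if (rp.open_file(argv[1]) != LIBRAW_SUCCESS) return 2;
    printf("baseline_exposure = %g EV\n",
           rp.imgdata.color.dng_levels.baseline_exposure);
    return 0;
}
```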
Sorry if I wasn't clear on this: by "correct scale" I mean that every image is scaled by the same factor, so that one unit of pixel intensity measures the same amount of physical light energy in all images.
After doing some research I found that each file has a different value for "baseline exposure", and to convert them back to the same scale one needs to apply the following correction:
intensity = raw_intensity * 2^baseline_exposure
After this conversion, intensities are consistent across images.
I still have a few questions though:
- Does LibRaw expose "baseline_exposure" through its API?
- I think the above correction should be applied to the post-processed image; is that correct?
I did not quite understand what is meant by the 'correct scale' in this context.
DNG tag 0xc61d values are extracted into
1) imgdata.color.maximum (for channel 0)
2) imgdata.color.dng_levels.dng_whitelevel[channel] (for selected DNG ifd)
3) tiff_ifd[ifd].dng_levels.dng_whitelevel[channel] (all IFDs)
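A short sketch of reading the first two of those from the public API (assuming LibRaw is installed and a DNG path is given; the per-IFD `tiff_ifd[]` copies are internal parser state, not exposed the same way):

```cpp
#include <cstdio>
#include <libraw/libraw.h>

int main(int argc, char **argv) {
    if (argc < 2) return 1;
    LibRaw rp;
    if (rp.open_file(argv[1]) != LIBRAW_SUCCESS) return 2;
    // Channel-0 white level, as used by processing:
    printf("color.maximum = %u\n", rp.imgdata.color.maximum);
    // Per-channel WhiteLevel (tag 0xc61d) for the selected DNG IFD:
    for (int c = 0; c < 4; c++)
        printf("dng_whitelevel[%d] = %u\n", c,
               rp.imgdata.color.dng_levels.dng_whitelevel[c]);
    return 0;
}
```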
Thank you so much for the quick reply. When I opened both files in Photoshop and the Preview app they were shown at the correct scale. Is there any way I can fetch that information from the DNG with LibRaw?
Your DNG files have data range of 65535:
| | 15) WhiteLevel = 65535 65535 65535
So, 65535 is an expected data range even without scaling.
There is no need to modify Makefile.msvc for 64-bit.
Just run nmake in the corresponding (32- or 64-bit) Developer Shell (or use vcvarsNN.bat to set up the environment variables).
We also provide LibRaw.sln (one may need to change the tools version to match the Visual Studio used).
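The steps above, sketched for a 64-bit command-line build (the prompt and vcvars64.bat names are the standard Visual Studio ones; adjust to your VS version and install path):

```shell
:: From an "x64 Native Tools Command Prompt for VS", or after running
:: vcvars64.bat in a plain cmd window, build with the shipped makefile:
cd LibRaw
nmake -f Makefile.msvc

:: For a 32-bit build, use the x86 prompt / vcvars32.bat instead;
:: the same Makefile.msvc works unchanged.
```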
Already in github/master: https://github.com/LibRaw/LibRaw/commit/adcb898a00746c8aa886eb06cc9f5a1c...
Could we have support for the OM System OM-1 (Olympus) soon?
Thank you! :) I get matching values after updating libraw to the current git master.
A7C (ILCE-7C) is not officially supported by 0.20.2, consider upgrade to current 'public snapshot' from github: https://github.com/LibRaw/LibRaw
There is now more info on darktable's side: https://github.com/darktable-org/rawspeed/pull/250#issuecomment-1077723895
What version of LibRaw do you use?
Thank you all for the answers. I read the binary with a native C++ fstream (800x800 = 640,000 values), then used __builtin_bswap16() to swap and obtain my Bayer data, which I was able to convert to RGB.
The procflags argument of open_bayer() should also do the swap for you in the above example: https://www.libraw.org/docs/API-CXX.html#open_bayer
For byte swapping use something like ntohs() (strictly, this converts from big-endian to your machine/host endianness, so it may be a no-op if your machine is big-endian as well) or, depending on your system/compiler, some incarnation of the bswap16() intrinsic/builtin function (e.g. __builtin_bswap16() for GCC, _byteswap_ushort() for MSVC, etc.)
Thank you again for your answers.
Actually, yes, the data is encoded as int16.
I was able to read it in Python after swapping the byte order (the file is big-endian). Any idea how I can achieve this in C++?
Also, my guess is that each pixel value comes from two consecutive bytes. Does this seem reasonable? I don't have the full format specification at hand.
Also, if your file is 2 bytes per pixel, uncompressed, you do not need LibRaw to read source data values. Your values are already here (in unsigned short format, probably you may need big/little-endian swap; I know nothing about your images and your camera)
If your input data is 16 bit (2 bytes per pixel), 1,280,000 bytes is expected....
Thank you Alex for your answer. I tried open_bayer() as I think the raw file has no metadata. With the code below I could see that readb was 1,280,000 instead of 640,000 for an 800x800 image. The final result from dcraw_process() is not satisfactory. I wonder if and how I could retrieve the unpacked values and do the conversion myself.
FILE *in = fopen(fname, "rb");
fseek(in, 0, SEEK_END);
unsigned fsz = ftell(in);
unsigned char *buffer = (unsigned char *)malloc(fsz);
if (!buffer) return 2;
fseek(in, 0, SEEK_SET);
unsigned readb = fread(buffer, 1, fsz, in);
fclose(in);
if (readb != fsz) return 3;
std::cout << "readb " << readb << std::endl;

LibRaw rp;
rp.imgdata.params.output_tiff = 1;
int ret = rp.open_bayer(buffer, fsz, 800, 800, 0, 0, 0, 0, 0,
                        LIBRAW_OPENBAYER_BGGR, 0, 0, 0);
if (ret != LIBRAW_SUCCESS) return 4;
if ((ret = rp.unpack()) != LIBRAW_SUCCESS)
    printf("Unpack error: %d\n", ret);
if ((ret = rp.dcraw_process()) != LIBRAW_SUCCESS)
    printf("Processing error: %d\n", ret); // I never get an error here
char outfn[1024];
snprintf(outfn, sizeof(outfn), "%s.tif", fname);
if (LIBRAW_SUCCESS != (ret = rp.dcraw_ppm_tiff_writer(outfn)))
    printf("Cannot write %s: %s\n", outfn, libraw_strerror(ret));
else
    printf("Created %s\n", outfn);