Thanks for the quick response!

For this project, I need three things from the raw data: image height, image width, and the bitmap. There are already C calls for get_raw_height and get_raw_width; beyond those, all I need is the raw bitmap.
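For reference, a minimal sketch of the dimension calls mentioned above, using LibRaw's C API ("shot.nef" is a placeholder path; error handling trimmed):

```c
#include <stdio.h>
#include <libraw/libraw.h>

int main(void)
{
    libraw_data_t *lr = libraw_init(0);

    /* "shot.nef" is a placeholder for whatever file the camera produced. */
    if (libraw_open_file(lr, "shot.nef") != LIBRAW_SUCCESS) {
        libraw_close(lr);
        return 1;
    }

    /* The raw dimensions are available right after open, before any
       unpacking or processing. */
    printf("raw: %d x %d\n",
           libraw_get_raw_width(lr), libraw_get_raw_height(lr));

    libraw_close(lr);
    return 0;
}
```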

Black/white levels, the Bayer pattern, color data, and white balance data are all needed if you want to reconstruct a full-color image from the original -- and for that, yes, dcraw_process() is absolutely the right thing to call, because it manipulates the bitmap to reconstruct the image. This app doesn't need that level of processing. I also tried raw2image, and even that appears to do some light debayering work and modifies the base bitmap.

This project needs the raw, unmodified bitmap straight off the camera sensor. It doesn't need any of the white balance data or color information, nor is it trying to recreate the full-color picture. What's critical is the turnaround time between shutter snaps: in between snaps, the app needs to extract the raw image data, check just a couple of things, make decisions, and get back to the business of taking pictures as quickly as it can.

The first pass at integrating LibRaw invoked the unprocessed_raw utility (with the -T switch) to convert the bitstream to a TIFF, and that does exactly what's needed. However, it also adds the overhead of writing to and reading from the disk, plus wrapping the bitmap in a TIFF container just to unwrap it again.
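That first-pass flow looked roughly like this ("shot.nef" is a placeholder filename, and this assumes the unprocessed_raw sample from the LibRaw distribution is on PATH):

```shell
# Write the unprocessed sensor data out as a TIFF next to the input file.
unprocessed_raw -T shot.nef

# The app then has to re-read shot.nef.tiff from disk and strip the TIFF
# wrapper just to get back to the bitmap -- the overhead described above.
```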

Ideally, the app would get the image from the camera, extract what it needs from the raw data in memory, and pass everything around in memory buffers. Adding that one routine to the API would streamline the entire process and make the capability available to anyone else as well.
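A rough sketch of what that in-memory path could look like with the existing C API -- open from a buffer, unpack, and read the Bayer data straight out of rawdata.raw_image without calling raw2image or dcraw_process. The buffer names are placeholders, error handling is trimmed, and this assumes a Bayer-pattern sensor (raw_image is NULL for non-Bayer formats):

```c
#include <stdio.h>
#include <stdlib.h>
#include <libraw/libraw.h>

/* camera_buf/camera_len stand in for a frame already sitting in memory,
   straight from the camera, with no disk round trip. */
int inspect_frame(const void *camera_buf, size_t camera_len)
{
    libraw_data_t *lr = libraw_init(0);

    if (libraw_open_buffer(lr, camera_buf, camera_len) != LIBRAW_SUCCESS ||
        libraw_unpack(lr) != LIBRAW_SUCCESS) {
        libraw_close(lr);
        return -1;
    }

    /* rawdata.raw_image is the sensor bitmap exactly as unpacked: no
       debayering, no white balance, no color processing applied. */
    unsigned short *bayer = lr->rawdata.raw_image;
    int w = libraw_get_raw_width(lr);
    int h = libraw_get_raw_height(lr);

    if (bayer) {
        /* ...review the couple of values the app cares about here... */
        printf("first sample: %u (%d x %d)\n", bayer[0], w, h);
    }

    libraw_close(lr);
    return 0;
}
```

This keeps the whole snap-to-decision loop in memory, which is the turnaround-time win described above.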