See below a thread about a stand alone version of digiKam DNG Converter tool running under Windows.
http://www.digikam.org/drupal/node/378#comment-17895
Gilles Caulier
> people have misunderstood ETTR in the past
Are you saying we are past that misunderstanding? My daily e-mails show the direct opposite.
> The issue is that neither cameras nor raw processing tools give a convenient method to see the (non-white-balanced) usage of the three raw sensor channels
The issue is not white balance alone, but also the arbitrary statement of camera sensitivity. And the channels are four, not three. Acting on the presumption that G1=G2 is wrong.
> A solution needs to start outside the camera (and may be adopted, in time, in camera).
A real solution needs to start with the camera, as we have less and less unprocessed data in what is commonly referred to as "raw output". Out-of-camera solutions are in many cases just crutches, as they can't take into account the modifications made to the data in camera before the raw is recorded to a memory card.
So concretely, and irrespective of whether people have misunderstood ETTR in the past: the issue is that neither cameras nor raw processing tools give a convenient method to see the (non-white-balanced) usage of the three raw sensor channels.
What can be done about this? A solution needs to start outside the camera (and may be adopted, in time, in camera).
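As an illustration of what such a tool outside the camera could show, here is a minimal sketch (hypothetical numpy code, assuming an RGGB mosaic and a 14-bit white level; real raw data would come from the file, not from a random generator) that reports the usage of all four raw channels separately, G1 and G2 included:

```python
import numpy as np

def channel_usage(mosaic, white_level=16383, pattern="RGGB"):
    """Split a Bayer mosaic into its four channels and report, per channel,
    the peak level and the fraction of clipped photosites."""
    assert pattern == "RGGB"  # this sketch covers the RGGB layout only
    planes = {
        "R":  mosaic[0::2, 0::2],
        "G1": mosaic[0::2, 1::2],
        "G2": mosaic[1::2, 0::2],
        "B":  mosaic[1::2, 1::2],
    }
    return {name: (int(p.max()), float((p >= white_level).mean()))
            for name, p in planes.items()}

# Simulated mosaic: the green channels run roughly one stop "hotter" than
# red and blue, so G1/G2 clip while R and B still have headroom.
rng = np.random.default_rng(0)
mosaic = rng.integers(0, 12000, size=(16, 16))
mosaic[0::2, 1::2] = np.minimum(mosaic[0::2, 1::2] * 2, 16383)  # G1
mosaic[1::2, 0::2] = np.minimum(mosaic[1::2, 0::2] * 2, 16383)  # G2
stats = channel_usage(mosaic)
```

With real files, the mosaic, the CFA layout and the white level would have to be read from the raw data itself, which is exactly the convenience that is missing today.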
> So how about posting a link to the images used for this article.
There are thousands of images I discussed myself, and the point photographers were making was always "we are following ETTR", or, more specifically, often pointing to that article on Reichmann's site (and yes, his name is spelled with two "n"s at the end). The images come from the everyday practice of pre-press, from different photo sites, forums, etc. At some point in one pre-press bureau we arranged a practical shooting session to dispel the myth created by the phrase you quoted: "...bias your exposures so that the histogram is snugged up to the right, but not to the point that the highlights are blown." The idea that this note was written based on one image taken by one person is totally wrong.
The other problem with ETTR as per the LL approach is that it does not take into account how the camera actually meters, and how much negative compensation is already applied before the histogram is displayed. For different cameras it is between 0.5 and 1 EV negative.
Dear Glenn, I'm afraid Lab was not designed to be perceptually uniform, but to be deltaE uniform.
Glenn,
Lab is definitely not the best choice. On the other hand, we need some widely recognized color space for image storage and exchange, and from that point of view Lab is the better choice (RGB and CMYK are imperfect, and other color spaces are uncommon). The L axis of Lab is good enough.
AFAIK, there are no 'digitizable' perceptually uniform color spaces. The Munsell scale is good, but there is no easy way to calculate perceptually intermediate color values.
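For context, the standard sRGB-to-Lab conversion behind this discussion can be sketched in a few lines (CIE formulas, D65 white point); the L value is the "good enough" lightness axis mentioned above:

```python
def srgb_to_lab(r, g, b):
    """Convert an 8-bit sRGB triple to CIE Lab (D65), per the standard formulas."""
    def lin(c):  # undo the sRGB transfer curve
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear sRGB -> XYZ (standard sRGB/D65 matrix)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    def f(t):  # CIE Lab companding
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    L = 116 * fy - 16
    return (L, 500 * (fx - fy), 200 * (fy - fz))

# White maps to L ~ 100 and mid-grey (119, 119, 119) to L ~ 50: the L axis
# approximates perceptual lightness even though Lab as a whole is not uniform.
```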
Actually, I'm not asking to look at many underexposed images, only the image used in the writing of this article.
I see underexposed, overexposed and correctly exposed images all the time, and rarely do people point me to Reichman's article. In my experience, on balance most people who discuss the concept of ETTR have never read the article and are restating second- and third-hand information.
So how about posting a link to the images used for this article? If the information and position stand up, then it can only help.
Regards,
Peter
Interesting reading.
One thing I would point out is that human vision is a tricky beast, and it may be difficult to draw the right conclusions from empirical results. There are many, many things going on at once... we can get a sense of that from the wide variety of visual illusions out there.
I've found that a lot of the theories out there are wrong, e.g.:
http://colorcorrection.info/color-science/hermann-grid-illusion-nobody-k...
As far as building a RAW converter goes, some of the color models like LAB might not necessarily be what photographers want.
http://colorcorrection.info/color-science/is-lab-useful-for-color-correc...
---
In the grand scheme of things, I think that the psychology of the end user also plays a huge role. E.g. with premium audio cables, there are 'snake oil' products out there that people buy because their expectations about the price (expensive = good, etc.) trump their actual perception!
When looking at a photograph, there are noticeable differences between the reproduced image and real life: it is not 3-D, the dynamic range cannot reproduce specular highlights, there are artifacts, and so on. We can tell whether we just looked at a photograph or through a window. But for the most part, people forgive that and don't pay attention to the technical flaws and shortcomings.
I think if photographers want to make truly "realistic" photos, there is art involved in the photographer tweaking the controls manually until it looks right. Part of this is to capture the 'signal processing' done by our brain. And sometimes part of it is to aim for naturalism, where the image looks like what it should look like.
To clarify what I mean by naturalism: in audio, we expect gunshots to sound like they do in the movies (they have a certain roar from the compression effects applied to them). Yet actual recordings of gunshots sound very different.
The options are:
- use Auto White Balance in NX (White Balance: Set Colour Temperature: New WB: Calculate automatically) as a first step and adjust from there;
- use the click/drag grey white balancing method (White Balance: Set Grey Point);
- select the type of light in the scene and adjust colour temperature (for example, White Balance: Set Colour Temperature: New WB: Daylight: Cloudy);
- take a separate shot with a custom white balance and, in NX, copy the white balance from it to the shots taken with UniWB.
Hi, having set my D3 up with UniWB, how do I then correctly process the images captured with UniWB in NX2 so that I get an image with visually appropriate white balance? Thanks, Mike
half_mt is just a simple(!) sample.
Anyway, it will be fixed in the next release.
If one of the threads in half_mt encounters an error processing a file, it quits and never restarts. The remaining threads, however, continue to function.
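The fix being described amounts to catching the per-file error inside the worker loop instead of letting the exception end the thread. A minimal generic sketch (plain Python, not the actual half_mt source; `fake_convert` is a stand-in for the real per-file processing):

```python
import queue
import threading

def worker(jobs, results, process):
    """Pull file names off the queue; a failure on one file is recorded
    and the thread moves on, rather than exiting."""
    while True:
        path = jobs.get()
        if path is None:          # sentinel: no more work for this thread
            break
        try:
            results.append((path, process(path)))
        except Exception as exc:  # per-file error: record it, keep the thread alive
            results.append((path, exc))

def fake_convert(path):
    # Stand-in for real raw processing; fails on one "bad" file.
    if "bad" in path:
        raise ValueError("unsupported raw file")
    return path + ".ppm"

jobs, results = queue.Queue(), []
threads = [threading.Thread(target=worker, args=(jobs, results, fake_convert))
           for _ in range(2)]
for t in threads:
    t.start()
for p in ["a.nef", "bad.raw", "b.nef"]:
    jobs.put(p)
for _ in threads:     # one sentinel per worker thread
    jobs.put(None)
for t in threads:
    t.join()
```

All three files are accounted for in `results`; the error on "bad.raw" is captured as data instead of silently reducing the thread pool.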
> Is there much image degradation if I were to use a gel (rather than a glass filter) over my lens? I happened to settle upon a Roscoe Pink (4830) for my D80: after evaluating a sample set of Roscoe swatches alongside UniWB, I found that the Pink resulted in the least amount of red and blue WB adjustment for the D80 with flash as the light source.
When using flash as the only light source I always place the filter in front of the flash, not in front of the lens. This way no image degradation occurs.
If I need to use colour gel filters outdoors, I prefer Lee filters. When used with a deep compendium they cause image degradation on par with a regular protection filter.
LibRaw is based on the dcraw sources. In turn, dcraw uses RGB-XYZ tables from Adobe (the DNG converter? Or Camera Raw?). So do not expect to get good color from digiKam/LibRaw for a new camera until an updated dcraw and/or Adobe converter arrives.
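To illustrate what those tables do: each entry is essentially a 3x3 matrix tying the camera's RGB response to CIE XYZ, and without a matrix measured for the specific camera the converter cannot produce correct color. A toy sketch with made-up numbers (illustrative values only, not any real camera's characterization):

```python
import numpy as np

# Hypothetical camera-RGB -> XYZ matrix. Real values come from per-camera
# characterization tables such as the ones dcraw imports from Adobe.
cam_to_xyz = np.array([
    [0.60, 0.25, 0.10],
    [0.25, 0.65, 0.10],
    [0.05, 0.15, 0.85],
])

def to_xyz(cam_rgb):
    """Map a linear camera-RGB triple into CIE XYZ with the 3x3 matrix."""
    return cam_to_xyz @ np.asarray(cam_rgb, dtype=float)

# With a wrong or missing matrix, the resulting XYZ (and hence any output
# RGB) is simply wrong, which is why unsupported cameras show poor color.
xyz = to_xyz([1.0, 1.0, 1.0])
```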
The problem is Adobe's slow release cycle, and I bet this will remain a problem every year after Photokina. New cameras will be available on the market, and Adobe will pick and choose which ones to support, and when.
An open source alternative to get to DNG would be very helpful.
Right now I have a Panasonic LX3 and it's not supported. The same goes for the G10, and I am sure there are a lot more.
And unfortunately, a lot of people (including me) are tied to Adobe Photoshop.
Any help getting digiKam running on Windows would be appreciated.
Thanks
Do you really want a DNG converter under Windows? AFAIK, Adobe DNG Converter is free...
I have been trying to find a Windows port of digiKam and it seems it's not there.
Is it possible to get a digiKam install for Windows?
Are there any other applications that use LibRaw to convert raw files to DNG?
Thanks
There is a free/open-source DNG converter based on LibRaw:
http://www.digikam.org/drupal/node/373
Hi there,
Would it be possible for you to provide some pointers on how to use LibRaw to get DNG output?
Raw documentation is very sparse, and it's very hard to understand anything from the dcraw code.
Thanks,
Thanks for your reply. I have some more general questions, if I may pick your brain for more info...
Is there much image degradation if I were to use a gel (rather than a glass filter) over my lens? I happened to settle upon a Roscoe Pink (4830) for my D80: after evaluating a sample set of Roscoe swatches alongside UniWB, I found that the Pink resulted in the least amount of red and blue WB adjustment for the D80 with flash as the light source. I never really use the filter, but I was so intrigued by all the posts I read on dpreview that I had to try it out for myself as an academic exercise.
At the moment, I'm using NX 2.1 for raw conversion; I finally decided to give it a shot after using NX 1.3 for a while. The workflow of NX 2 still isn't very good, but I like the auto color aberration capability, and I just noticed that the WB problem (aka FixNEF) is now resolved.
As a side note, I don't have a Mac... otherwise I would use RPP. I was able to load OSX on my PC, but I had a weird issue with OSX changing my system clock such that it was 12 hours off. Anyway, RPP seemed very nice (evaluated 3 months ago), but I was getting some weird demosaicing artifacts (zig zags, if I recall, with AHD). I noticed that there is new multiprocessor capability with the latest version of RPP, so I may have to give it a shot again.
regards / roy
Very useful. Exceptional noise handling does not mean the exposure can be wrong and the result will be the same as with correct exposure. I also prefer to know when a channel is truly blown out. I rarely use the D3 at base ISO (most of the exceptions are wide-angle shots), but when I do, I use a gel to balance the sensitivity of the channels if shooting conditions permit. It never hurts to get the per-channel exposure correct at the time of shooting instead of post-processing for correct colour and exposure ;)
How useful is UniWB with the D3? Considering that the D3 is supposed to have exceptional noise handling, do you use UniWB in conjunction with a colored filter (over flash or over lens)? Or do you just use UniWB for histogram evaluation only?
regards / roy
Both Mk II.
Which teleconverters were used? Canon EF? Mk I or II?
The point of using a magenta filter is to achieve better exposure of the red and blue channels by allowing less light to hit the photosites under the green filters of the sensor's CFA. This is physical, pre-capture filtration; as such, it cannot be reproduced by tuning the camera or through post-processing.
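The effect can be put in numbers. Suppose the green channel reaches clipping one stop before red and blue, and a magenta gel transmits roughly 50% of the green light and nearly 100% of red and blue (hypothetical figures for illustration): opening up by one stop then puts green back where it was, while red and blue gain a full stop of exposure. A small sketch of that arithmetic:

```python
import math

def exposure_gain(channel_headroom_stops, filter_transmission):
    """Extra stops of overall exposure the filter buys.

    channel_headroom_stops: stops of headroom each channel has before clipping.
    filter_transmission: fraction of light the filter passes per channel.
    The limiting channel after filtering sets the new exposure."""
    after = {ch: channel_headroom_stops[ch] - math.log2(filter_transmission[ch])
             for ch in channel_headroom_stops}
    return min(after.values())

# Hypothetical scene: green clips first (0 stops headroom), R/B have 1 stop.
headroom = {"R": 1.0, "G": 0.0, "B": 1.0}
magenta = {"R": 1.0, "G": 0.5, "B": 1.0}  # assumed filter transmissions
gain = exposure_gain(headroom, magenta)   # one full stop with these numbers
```

Without the filter the same function returns zero gain: green is already at the clipping point, so no extra exposure is available for red and blue.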