Jens
2010-08-12 13:28:56 UTC
Couldn't the exact same result as, say, ISO 10 be achieved by taking a photo at the sensor's ideal sensitivity (e.g. ISO 200, for the sake of this argument), dividing all measured values by 20 in memory while the sensor takes another measurement with no delay in between, then dividing that measurement's values by 20 as well and adding them to the photo already held in memory?
Repeat this another 18 times, for 20 frames in total, and the summed values should match those of a normal exposure lasting 20 times as long as one at ISO 200, i.e. an exposure at effectively ISO 10.
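
To illustrate what I mean, here's a rough sketch of the accumulation idea in Python/NumPy. Everything here is hypothetical: capture_frame is just a stand-in for one sensor readout at base sensitivity, not any real camera API, and readout noise is ignored.

    import numpy as np

    rng = np.random.default_rng(0)

    def capture_frame(shape=(4, 6)):
        # Hypothetical stand-in for one sensor readout at base sensitivity
        # (ISO 200 in this example); photon counts follow Poisson statistics.
        return rng.poisson(lam=1000.0, size=shape).astype(np.float64)

    N = 20  # ISO 200 / ISO 10 = 20 sub-exposures

    # Accumulate N frames, each scaled by 1/N, to emulate a single
    # exposure that is N times as long at 1/N the effective sensitivity.
    accumulator = np.zeros((4, 6))
    for _ in range(N):
        accumulator += capture_frame() / N

    print(accumulator)  # ~ brightness of one frame, noise of twenty
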
This way, one wouldn't need expensive ND filters, and because far more photons hit the sensor in total, the measurements would likely be even more accurate than those of a photo taken through an ND filter, which passes far fewer photons per unit of time.
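
A back-of-the-envelope comparison of the two routes, assuming purely Poisson shot noise, no read noise, and illustrative numbers:

    import math

    photons_per_frame = 10_000  # hypothetical photons per pixel per sub-exposure
    n_frames = 20

    # ND filter route: a 20x longer exposure at 1/20 transmission collects
    # the same total photons as one plain frame.
    snr_nd = math.sqrt(photons_per_frame)

    # Stacking route: 20 full frames collect 20x the photons, so averaging
    # improves the SNR by sqrt(n_frames).
    snr_stacked = math.sqrt(n_frames * photons_per_frame)

    print(f"ND filter SNR ~ {snr_nd:.0f}, stacked SNR ~ {snr_stacked:.0f}")
    # The stacked result comes out sqrt(20) ~ 4.5x cleaner under these assumptions.
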
So I'm wondering: why isn't it done that way? When a seemingly simple solution to a problem appears to elude professional engineers working in the field, either all those engineers are idiots or one is underestimating the complexity of the problem. The latter is much more likely, of course, but I'd like to understand the actual difficulties.
Could it be that the sensor is not instantly ready to take another measurement once the data of the previous one has been read out? Then again, recording video is possible, so frame rates of 25 or even 60 per second are apparently achievable, and readout time shouldn't be a problem for an exposure lasting several seconds or even minutes.