Dynamic Range Question

I've always understood that full frame sensors generally have better dynamic range than crop sensors, and I always thought it was due to the larger pixel pitch. However, you now get high-resolution 50-60MP full frame sensors that have better dynamic range than 24MP APS-C sensors with a similar pixel pitch. Also, I have just been on Photons to Photos and my camera has a lower dynamic range in crop mode than in full frame mode, yet the pixel pitch is obviously the same.

So why is it that full frame sensors have better DR than crop sensors, and why does the DR change on the same camera between full-res and crop mode?
 
Toby, there is a wealth of material on the internet about all of this, but you have to make sure that you are comparing apples with apples as different generation sensors have different performance. More modern sensors effectively have deeper light gathering wells (even though they might have a higher pixel density), but the overall light captured and its effect on noise play a large part.

A simple answer to your FF v Crop question is noise - take a look at this - there is a section further down the page on effects of sensor size

 
Dynamic range is only the difference between the minimum and the maximum light level measurable/recordable. Early on the minimum was nearly the same due to the camera's inherent noise floor (i.e. noisy ADC, lower "ISO performance"); so larger photosites had an advantage because they have a greater maximum capacity and expand the dynamic range farther into the brighter regions. However, that is/was an "engineering" consideration and only true when taken as a stand-alone factor.

There was also the consideration of fill efficiency. More photosites meant more traces between them and more gaps between the micro lenses. However, with modern back-illuminated sensors, "gapless" lens arrays, and other technological advancements (miniaturization), those factors have been almost entirely eliminated.

And now, with modern ISO-invariant (extremely low noise) cameras the minimum measurable/recordable is no longer nearly the same. And a smaller photosite is actually more sensitive to light; so now smaller photosites increase dynamic range by expanding it into the darker regions.

Note that the "larger"/"smaller" is in relation to the photosite's capacitance (full well capacity). That can be increased by making the photosite wider (larger), or by adding a secondary capacitor to it (deeper). Modern "dual gain" sensors have a secondary capacitor in parallel with the photosite (which is also acting as a capacitor). This increases its full well capacity in the low conversion gain state for brighter light scenarios. And it disconnects the secondary capacitor for increased sensitivity in the high conversion gain state.

Now, looking at the "full picture", dynamic range (and noise) is really about light per area. And when looking at an image you are not looking at individual pixels; you are looking at an area of pixels used to record an area of light. I.e. smaller pixels get less each, but the total/area remains the same. And if you crop a scene/image (crop sensor, crop mode, crop in post) you are discarding area, which is discarding light (potential), which increases noise and reduces dynamic range.
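To put rough numbers on the "light per area" point, here is a minimal Python sketch. All values are assumed purely for illustration: a shot-noise-limited capture, the same exposure and framing, and nominal 36x24mm vs 24x16mm sensor areas.

```python
# Minimal sketch: same exposure (light per area), different sensor area.
# Assumed numbers; shot noise only, so SNR ~ sqrt(total photons collected).
import math

photons_per_mm2 = 1_000_000          # assumed photon density for a given exposure

full_frame_mm2 = 36 * 24             # 864 mm^2
aps_c_mm2      = 24 * 16             # 384 mm^2 (~1.5x crop)

ff_total   = photons_per_mm2 * full_frame_mm2
crop_total = photons_per_mm2 * aps_c_mm2

# Same exposure either way, but the larger area collects more total light;
# for the same final image that total is what sets the shot-noise SNR.
ratio = math.sqrt(ff_total / crop_total)
print(f"Total light ratio: {ff_total / crop_total:.2f}x")                           # 2.25x
print(f"Image-level SNR advantage: {ratio:.2f}x (~{math.log2(ratio):.2f} stops)")   # 1.50x, ~0.58
```

Under those assumptions the ~2.25x area advantage is worth roughly 1.5x in image-level SNR, a bit over half a stop.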
 
Excellent answer Steven, and of course the improved technology is the (sensible) reason for people to upgrade their cameras 20 years before they wear out:)

But although technology can help us, it doesn't and can't beat stupid . . .
I saw a news item on the TV last night: a register office celebrated 100 years of existence by marrying 100 couples in one day. The clip briefly showed a "photographer", complete with a state-of-the-art mirrorless camera, taking photos of a pair of "victims" (the bride and groom) walking towards her. The camera was in machine-gun mode, complete with a flashgun in the hotshoe, which was somehow keeping up with the high frame rate, so must have been on extremely low power.

The flashgun was fitted with one of those diffusers that spread the light and reduce the power and it was pointing straight up at the sky, no doubt trying to soften the shadows by bouncing off the rain clouds above.
 
Toby, there is a wealth of material on the internet about all of this, but you have to make sure that you are comparing apples with apples as different generation sensors have different performance. More modern sensors effectively have deeper light gathering wells (even though they might have a higher pixel density), but the overall light captured and its effect on noise play a large part.

A simple answer to your FF v Crop question is noise - take a look at this - there is a section further down the page on effects of sensor size

Thanks, I did of course search t'internet beforehand but separating fact from fiction is not so easy ;) I shall take a look at that article. My initial question regarding noise is whether we are talking about the pixel level or not, i.e. if looking at the pixel level in crop mode or FF mode it should look the same (in my mind); hopefully that article addresses this.
Dynamic range is only the difference between the minimum and the maximum light level measurable/recordable. Early on the minimum was nearly the same due to the camera's inherent noise floor (i.e. noisy ADC, lower "ISO performance"); so larger photosites had an advantage because they have a greater maximum capacity and expand the dynamic range farther into the brighter regions. However, that is/was an "engineering" consideration and only true when taken as a stand-alone factor.

There was also the consideration of fill efficiency. More photosites meant more traces between them and more gaps between the micro lenses. However, with modern back-illuminated sensors, "gapless" lens arrays, and other technological advancements (miniaturization), those factors have been almost entirely eliminated.

And now, with modern ISO-invariant (extremely low noise) cameras the minimum measurable/recordable is no longer nearly the same. And a smaller photosite is actually more sensitive to light; so now smaller photosites increase dynamic range by expanding it into the darker regions.

Note that the "larger"/"smaller" is in relation to the photosite's capacitance (full well capacity). That can be increased by making the photosite wider (larger), or by adding a secondary capacitor to it (deeper). Modern "dual gain" sensors have a secondary capacitor in parallel with the photosite (which is also acting as a capacitor). This increases its full well capacity in the low conversion gain state for brighter light scenarios. And it disconnects the secondary capacitor for increased sensitivity in the high conversion gain state.

Now, looking at the "full picture", dynamic range (and noise) is really about light per area. And when looking at an image you are not looking at individual pixels; you are looking at an area of pixels used to record an area of light. I.e. smaller pixels get less each, but the total/area remains the same. And if you crop a scene/image (crop sensor, crop mode, crop in post) you are discarding area, which is discarding light (potential), which increases noise and reduces dynamic range.
Thanks, it's this last bit that confuses me. As I've mentioned above, if you're looking at the pixel level then a 100% crop of the FF image would look exactly the same as the 100% crop of the crop image wouldn't it?
 
Excellent answer Steven, and of course the improved technology is the (sensible) reason for people to upgrade their cameras 20 years before they wear out:)

But although technology can help us, it doesn't and can't beat stupid . . .
I saw a news item on the TV last night: a register office celebrated 100 years of existence by marrying 100 couples in one day. The clip briefly showed a "photographer", complete with a state-of-the-art mirrorless camera, taking photos of a pair of "victims" (the bride and groom) walking towards her. The camera was in machine-gun mode, complete with a flashgun in the hotshoe, which was somehow keeping up with the high frame rate, so must have been on extremely low power.

The flashgun was fitted with one of those diffusers that spread the light and reduce the power and it was pointing straight up at the sky, no doubt trying to soften the shadows by bouncing off the rain clouds above.
That is something that puzzled me for a while. But what a flash used like that does do is put a small catchlight in their eyes, though probably only visible at closer distances.
Back in the day I always used fill flash. I had the most powerful and quickest-charging professional Braun flash, which could manage a 2-second recharge at full power, or virtually instant at 1/4 power; with no motor drive on the Rolleiflex it could easily keep up with a normal wedding shoot. The flash used a 6 Volt sealed lead acid battery. And a spare.
 
That is something that puzzled me for a while. But what a flash used like that does do is put a small catchlight in their eyes, though probably only visible at closer distances.
Back in the day I always used fill flash. I had the most powerful and quickest-charging professional Braun flash, which could manage a 2-second recharge at full power, or virtually instant at 1/4 power; with no motor drive on the Rolleiflex it could easily keep up with a normal wedding shoot. The flash used a 6 Volt sealed lead acid battery. And a spare.
I had one of those, until the battery died - a bit like the early mobile phones, wore it on a strap over my shoulder, early 60's from memory . . .

But I don't think that that flash could have done anything in terms of a catchlight, far too far away and at very low power
 
Thanks, it's this last bit that confuses me. As I've mentioned above, if you're looking at the pixel level then a 100% crop of the FF image would look exactly the same as the 100% crop of the crop image wouldn't it?
If you are comparing the "same portion" of an image, then the DR is not affected.

But if you are comparing the same output/result/image, then the DR is decreased by cropping. It is basically the inverse square law in effect and the main reason why *larger sensors have better low light performance.

Say you want to take a picture of a light bulb, and you want it to be a certain size within the composition. In order to do that with a larger image area (larger sensor/less crop) using the same exposure settings you either have to get closer to it, or you have to use a longer lens with a larger entrance pupil (same f#). Both methods increase the light density/amount of light taken in, which compensates for it being spread over a physically larger area keeping the exposure the same. So while the "exposure" is the same, the physically larger image of the light bulb actually contains/received more light... more light results in less noise and more dynamic range (measurable signal above noise) in the image; and in sensor as a whole, which is what is being measured (camera performance, not photosite performance).


(*not necessarily larger photosites)
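A rough sketch of the light-bulb example above, with assumed focal lengths and f-number, considering only the entrance-pupil geometry:

```python
# Sketch: same f-number and framing on two formats. The entrance pupil
# (focal length / f-number) is larger on the larger format, so more light is
# collected from the subject even though the exposure (light per area) is equal.
# Focal lengths and f-number are assumed for illustration (1.5x crop factor).
f_number   = 4.0
focal_ff   = 75.0                      # mm, on full frame
focal_crop = 50.0                      # mm, same field of view on a 1.5x crop

pupil_ff   = focal_ff / f_number       # 18.75 mm
pupil_crop = focal_crop / f_number     # 12.50 mm

light_ratio = (pupil_ff / pupil_crop) ** 2
print(f"Entrance pupils: FF {pupil_ff:.2f} mm vs crop {pupil_crop:.2f} mm")
print(f"Light collected from the subject: {light_ratio:.2f}x more on FF")   # 2.25x
```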
 
But what a flash does do used like that is put a small catch light in their eyes but probably only visible at closer distances.
Fixed it for you...
There is absolutely no benefit to pointing it in any direction other than directly at the subject if there is nothing for the light to bounce off, because only the light going directly towards the subject is going to reach it. And typically the diffusion dome's largest face is the front side, which would provide the largest catchlight.

But I don't think that that flash could have done anything in terms of a catchlight, far too far away and at very low power
It only needs to be brighter than the surrounding area in order to create a brighter reflection (catchlight). But almost certainly pointless (tiny, center of eye, weak).
 
The dynamic range of a modern digital camera is more than sufficient to capture almost all bright light situations, and more than enough for a misty landscape.
The days when this was a real problem are long gone. Capturing a full tonal range image and compressing it down to produce a large print has always had the danger of producing a very dull image. Producing an image with plenty of tone separation, well-defined dark shadows and highlight sparkle has always been a challenge, and will always require skill and compromise in both exposure and lighting, and an understanding of suitable post processing.

However, even crop sensors used to capture a 360 pan, using a carefully selected fixed exposure, can record a sufficient tonal range to produce an excellent finished pan.
This by its nature includes both with-the-light and against-the-light segments, of profoundly different light levels.

Things will undoubtedly get even better in terms of dynamic range in the future, which will leave more choices at the PP stage. But things are certainly good enough for most work now. We will never need a sensor that can do more than capture a few photons at the bottom end and the equivalent of detail in a sunlit bright white shirt at the other.
When we can do that, sensors will be truly invariant.
 
If you are comparing the "same portion" of an image, then the DR is not affected.

But if you are comparing the same output/result/image, then the DR is decreased by cropping. It is basically the inverse square law in effect and the main reason why *larger sensors have better low light performance.

Say you want to take a picture of a light bulb, and you want it to be a certain size within the composition. In order to do that with a larger image area (larger sensor/less crop) using the same exposure settings you either have to get closer to it, or you have to use a longer lens with a larger entrance pupil (same f#). Both methods increase the light density/amount of light taken in, which compensates for it being spread over a physically larger area keeping the exposure the same. So while the "exposure" is the same, the physically larger image of the light bulb actually contains/received more light... more light results in less noise and more dynamic range (measurable signal above noise).


(*not necessarily larger photosites)

What is important is the incident light captured on a given area at the same aperture. The crop size makes no difference.
At a given aperture the intensity of the light at the sensor is the same whatever the size of the sensor.
T-stops are more accurate in this respect than standard still photography stops.
 
The crop size makes no difference.
Of course it does... "crop size" is the same thing as sensor size, negative size, physical size, and required enlargement for a given output.

And if you start with a larger light source (closer, more magnification w/ larger entrance pupil, etc) then you are starting with more light, and recording/receiving more incident light. The exposure remains the same because the light/area (density) is the same; but it is more area of that light density. Same effect as using a larger softbox at the same effective brightness (higher power setting), or moving closer to the softbox... it's the ISL.
 
If you are comparing the "same portion" of an image, then the DR is not affected.

But if you are comparing the same output/result/image, then the DR is decreased by cropping. It is basically the inverse square law in effect and the main reason why *larger sensors have better low light performance.

Say you want to take a picture of a light bulb, and you want it to be a certain size within the composition. In order to do that with a larger image area (larger sensor/less crop) using the same exposure settings you either have to get closer to it, or you have to use a longer lens with a larger entrance pupil (same f#). Both methods increase the light density/amount of light taken in, which compensates for it being spread over a physically larger area keeping the exposure the same. So while the "exposure" is the same, the physically larger image of the light bulb actually contains/received more light... more light results in less noise and more dynamic range (measurable signal above noise).


(*not necessarily larger photosites)
Thanks, the first bit in bold makes sense, and how my brain is thinking. I kind of understand your second bit in terms of something like your example, however when it comes to something like a landscape it then falls apart again for me.

For example, if I'm taking a landscape on a FF camera and also an APS-C camera using the same effective focal length then the cameras are going to be in the same position, therefore one is not closer to the light than the other. I understand that the larger sensor gathers more light overall simply because it is bigger, but each section of both sensors captures the same amount of light.

To try and illustrate what I mean, assume rain is light and buckets are the photosites. If you have a FF sensor that is made up of a grid of 8 buckets wide and 6 buckets high and expose it to rain, you may find that each bucket has collected roughly 5 litres, 240 litres in total. If you remove the outer 'circle' of buckets you'd be left with a grid of 6 buckets wide and 4 buckets high. Whilst the total water is now 'only' 120 litres, each bucket still has 5 litres in it, so each bucket has been exposed to the same amount of rain (light). This to me suggests that the shot noise of each 'bucket' would therefore be the same, and therefore dynamic range would be the same. I don't see what influence the removed outer buckets would have on the inner ones?

I'm clearly missing something in my logic, however I followed a link from the link David sent me above and it seems to confirm my suspicions.


In this link they're trying to demonstrate that a larger sensor has less noise, but their example shows otherwise. They themselves say there's not much difference initially, however if they then downsample the larger sensor there's less noise. This suggests to me that it is not necessarily true that larger sensors are better at noise handling (and DR) just because they are larger per se, but rather because they have bigger photosites and/or are downsampled more for viewing.

I'm also still struggling to see why noise affects dynamic range. Reading the link that David posted about shot noise suggests that not all pixels are subject to noise, or at least not to the same degree, therefore in my head at least some pixels are subject to the brightest brights and some to the darkest darks and therefore have the same dynamic range. Yes, a larger sensor may have more of the brightest brights and more of the darkest darks, but AFAIK DR is not a measure of how much brightness/darkness there is but simply the difference between the bright and dark parts.
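The bucket analogy can be simulated directly. Below is a minimal sketch with made-up photon counts and shot noise only: per-bucket noise is indeed identical whether or not the outer buckets exist, but once both captures are scaled to the same output size, the one that kept more buckets averages more of them per output pixel and ends up less noisy, which is the downsampling effect the linked article describes.

```python
# Sketch of the bucket analogy with Poisson (shot) noise. Assumed numbers only.
import numpy as np

rng = np.random.default_rng(1)
mean_photons = 100                                     # assumed photons per bucket

full = rng.poisson(mean_photons, size=(3000, 2000))    # the full grid of buckets
crop = full[500:2500, 333:1667]                        # keep only the central ~1.5x crop

# Per-bucket (per-pixel) relative noise is the same in both:
print(full.std() / full.mean(), crop.std() / crop.mean())     # both ~0.10 (1/sqrt(100))

def downsample(img, k):
    """Average k x k blocks, i.e. view the image at a smaller common output size."""
    h, w = (img.shape[0] // k) * k, (img.shape[1] // k) * k
    return img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

# Scaled to the same output size, the full grid averages 3x3 buckets per output
# pixel versus 2x2 for the crop, so its remaining noise is lower:
print(downsample(full, 3).std() / mean_photons)   # ~0.033
print(downsample(crop, 2).std() / mean_photons)   # ~0.050
```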
 
Of course it does... "crop size" is the same thing as sensor size, negative size, physical size, and required enlargement for a given output.

And if you start with a larger light source (closer, more magnification w/ larger entrance pupil, etc) then you are starting with more light, and recording/receiving more incident light. The exposure remains the same because the light/area (density) is the same; but it is more area of that light density. Same effect as using a larger softbox at the same effective brightness (higher power setting), or moving closer to the softbox... it's the ISL.

Final magnification is an entirely separate issue that the sensor is totally ignorant of.....and only happens later.
The intensity of the light and exposure remains the same.

If you cut a processed 10x8 negative into four 5x4 negatives they will all have had the same exposure. Format is not an issue to exposure. You give the same exposure whatever the format. The intensity at the sensor/film surface remains the same.

It is the intensity that affects noise etc., not the total light in the system. The total light is not a function of exposure.
 
Final magnification is an entirely separate issue that the sensor is totally ignorant of.....and only happens later.
This is another question: we all know in general that larger sensors provide sharper images, and I was always led to believe that it was due to having to enlarge images from a larger sensor less for your final display. However, someone argued a while ago that this is not true with digital photography as you don't 'enlarge' an image any more, as one pixel from the sensor equates to one pixel on your display, i.e. it's a transfer of data to light up a pixel on your display rather than enlarging the image. If that's true then why do larger sensors generally provide sharper images?
 
Thanks, the first bit in bold makes sense, and how my brain is thinking. I kind of understand your second bit in terms of something like your example, however when it comes to something like a landscape it then falls apart again for me.

For example, if I'm taking a landscape on a FF camera and also an APS-C camera using the same effective focal length then the cameras are going to be in the same position, therefore one is not closer to the light than the other. I understand that the larger sensor gathers more light overall simply because it is bigger, but each section of both sensors captures the same amount of light.

To try and illustrate what I mean, assume rain is light and buckets are the photosites. If you have a FF sensor that is made up of a grid of 8 buckets wide and 6 buckets high and expose it to rain, you may find that each bucket has collected roughly 5 litres, 240 litres in total. If you remove the outer 'circle' of buckets you'd be left with a grid of 6 buckets wide and 4 buckets high. Whilst the total water is now 'only' 120 litres, each bucket still has 5 litres in it, so each bucket has been exposed to the same amount of rain (light). This to me suggests that the shot noise of each 'bucket' would therefore be the same, and therefore dynamic range would be the same. I don't see what influence the removed outer buckets would have on the inner ones?

I'm clearly missing something in my logic, however I followed a link from the link David sent me above and it seems to confirm my suspicions.


In this link they're trying to demonstrate that a larger sensor has less noise, but their example shows otherwise. They themselves say there's not much difference initially, however if they then downsample the larger sensor there's less noise. This suggests to me that it is not necessarily true that larger sensors are better at noise handling (and DR) just because they are larger per se, but rather because they have bigger photosites and/or are downsampled more for viewing.

I'm also still struggling to see why noise affects dynamic range. Reading the link that David posted about shot noise suggests that not all pixels are subject to noise, or at least not to the same degree, therefore in my head at least some pixels are subject to the brightest brights and some to the darkest darks and therefore have the same dynamic range. Yes, a larger sensor may have more of the brightest brights and more of the darkest darks, but AFAIK DR is not a measure of how much brightness/darkness there is but simply the difference between the bright and dark parts.
As well as the size and design of the photosites, you need to think about the number of photosites per unit area.

For the same sized subject you will have fewer photosites per unit area with a crop compared to the number of photosites per unit area when using the full sensor.

So although the photosites are the same size, whether cropped or FF, there are fewer of them per unit area, for the same area of the subject ,when you crop the sensor.

A simplified way of thinking about this is to imagine a 2m long "dark" subject, that when photographed on FF is 15mm long on the sensor, which would be 10mm long when using the sensor cropped (assuming the same angle of view in both cases).

To make the arithmetic simple, assume the sensor photosites are 0.01mm in size. So, with FF, the 2m long subject is represented on the sensor by 1500 photosites, but on the crop it's only 1000 photosites.

At low levels of light, the photons are being reflected randomly and in low numbers, so for any given exposure, the 1500 photosites are likely to capture more photons than the 1000 photosites.

The more photons captured, the greater the likelihood of the detail captured exceeding the base noise. Meaning that, with the same sensor, using it FF will give greater dynamic range than when you use it cropped (assuming you maintain the same angle of view with both the FF and cropped frame).
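A rough way to put numbers on the 1500-vs-1000 photosite example. The values are assumed purely for illustration: a deep-shadow subject averaging 2 photons per photosite and a read-noise floor of about 3 electrons per photosite.

```python
# Sketch of the 1500-vs-1000 photosite example. All numbers are assumed.
import numpy as np

rng = np.random.default_rng(2)
mean_signal = 2.0     # assumed photons per photosite from a deep-shadow subject
read_noise  = 3.0     # assumed noise floor per photosite (electrons RMS)

def subject_snr(n_photosites):
    """SNR of the subject's average level when recorded on n photosites."""
    photons = rng.poisson(mean_signal, n_photosites)          # shot noise
    readout = rng.normal(0.0, read_noise, n_photosites)       # read noise
    signal  = photons + readout
    # Averaging across the photosites covering the subject: the signal stays put
    # while the random noise partially cancels (falls as 1/sqrt(n)).
    return signal.mean() / (signal.std() / np.sqrt(n_photosites))

print(f"FF,   1500 photosites on the subject: SNR ~ {subject_snr(1500):.1f}")
print(f"Crop, 1000 photosites on the subject: SNR ~ {subject_snr(1000):.1f}")
```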
 
As well as the size and design of the photosites, you need to think about the number of photosites per unit area.

For the same sized subject you will have fewer photosites per unit area with a crop compared to the number of photosites per unit area when using the full sensor.

So although the photosites are the same size, whether cropped or FF, there are fewer of them per unit area, for the same area of the subject ,when you crop the sensor.

A simplified way of thinking about this is to imagine a 2m long "dark" subject, that when photographed on FF is 15mm long on the sensor, which would be 10mm long when using the sensor cropped (assuming the same angle of view in both cases).

To make the arithmetic simple, assume the sensor photosites are 0.01mm in size. So, with FF, the 2m long subject is represented on the sensor by 1500 photosites, but on the crop it's only 1000 photosites.

At low levels of light, the photons are being reflected randomly and in low numbers, so for any given exposure, the 1500 photosites are likely to capture more photons than the 1000 photosites.

The more photons captured, the greater the likelihood of the detail captured exceeding the base noise. Meaning that, with the same sensor, using it FF will give greater dynamic range than when you use it cropped (assuming you maintain the same angle of view with both the FF and cropped frame).
Thanks, that makes sense… to a point. I can understand that logic in terms of capturing the scene, but once captured I then don’t understand why the DR drops by cropping the image. In my head, once that image is captured it’s then set in stone; by cropping an image you’re not changing the brightness or darkness (unless you specifically crop out the sun for example).
 
Thanks, that makes sense… to a point. I can understand that logic in terms of capturing the scene, but once captured I then don’t understand why the DR drops by cropping the image. In my head, once that image is captured it’s then set in stone; by cropping an image you’re not changing the brightness or darkness (unless you specifically crop out the sun for example).
There are possibly a few things going on when you crop an existing file. But I think your comment on cropping out the sun has merit.

Usually, the main subject is around the mid tones, and by cropping into the subject, and cropping out some of the darkest shadows and brightest highlights, you reduce the DR.
 
Most people just take two exposures with the same camera, but using two cameras could work.
There's a setup somewhere where they take two cameras and shoot video. Each camera is used for VR, and they are then combined in real time somehow, with a very large DR. But ehh, I can't remember where I saw this.
 
As well as the size and design of the photosites, you need to think about the number of photosites per unit area.

For the same sized subject you will have fewer photosites per unit area with a crop compared to the number of photosites per unit area when using the full sensor.

So although the photosites are the same size, whether cropped or FF, there are fewer of them per unit area, for the same area of the subject ,when you crop the sensor.

A simplified way of thinking about this is to imagine a 2m long "dark" subject, that when photographed on FF is 15mm long on the sensor, which would be 10mm long when using the sensor cropped (assuming the same angle of view in both cases).

To make the arithmetic simple, assume the sensor photosites are 0.01mm in size. So, with FF, the 2m long subject is represented on the sensor by 1500 photosites, but on the crop it's only 1000 photosites.

At low levels of light, the photons are being reflected randomly and in low numbers, so for any given exposure, the 1500 photosites are likely to capture more photons than the 1000 photosites.

The more photons captured, the greater the likelihood of the detail captured exceeding the base noise. Meaning that, with the same sensor, using it FF will give greater dynamic range than when you use it cropped (assuming you maintain the same angle of view with both the FF and cropped frame).
Dynamic range has nothing to do with sharpness, however a black line on a white background will appear sharper than a medium grey line on a light grey background.
This has more to do with our eyes and brain than anything photographic.
The tones that a sensor can capture are more to do with the number of photons that can be captured and counted at each photosite than any other factor. In this, like many other things, a good big one is better than a good small one.
 
Most people just take two exposures with the same camera, but using two cameras could work.
It could indeed be done. You could capture the darker tones on one and lighter tones on the other and fuse the images.
However it would be far easier to use exposure bracketing on a single camera, and fuse those. I have done that quite a few times on both panoramas and single images.
 
It could indeed be done. You could capture the darker tones on one and lighter tones on the other and fuse the images.
However it would be far easier to use exposure bracketing on a single camera, and fuse those. I have done that quite a few times on both panoramas and single images.
Unless you are shooting video and not stills.
 
Dynamic range has nothing to do with sharpness, however a black line on a white background will appear sharper than a medium grey line on a light grey background.
This has more to do with our eyes and brain than anything photographic.
I'm not sure how this relates to my post
The tones that a sensor can capture are more to do with the number of photons that can be captured and counted at each photosite than any other factor. In this, like many other things, a good big one is better than a good small one.
That was fairly central to the point I was making, even though several other things are relevant.
 
It could indeed be done. You could capture the darker tones on one and lighter tones on the other and fuse the images.
However it would be far easier to use exposure bracketing on a single camera, and fuse those. I have done that quite a few times on both panoramas and single images.
I assume this was meant for @Woolsocks but I'm glad you agree with me :)
 
I had one of those, until the battery died - a bit like the early mobile phones, wore it on a strap over my shoulder, early 60's from memory . . .

But I don't think that that flash could have done anything in terms of a catchlight, far too far away and at very low power
Mine had more power than most modern pro flashes; it would correctly expose 100 ASA colour film at f/8 to take a large wedding group at about 30 ft as the main light in the evening. It had a massive capacitor, about 6 inches by 2.5 in diameter. One exploded on me at a Catholic wedding during the service. Very embarrassing. Johnsons of Hendon repaired it in a week. It was much more powerful than the top Mecablitz which most wedding and press boys used. After that one Braun only brought out one more professional model, which was a pity, but Metz out-priced them by a mile, so they were never popular in the UK.
 
There are possibly a few things going on when you crop an existing file. But I think your comment on cropping out the sun has merit.

Usually, the main subject is around the mid tones, and by cropping into the subject, and cropping out some of the darkest shadows and brightest highlights, you reduce the DR.
Again, this is understandable but when you see a camera's max DR measurement it's finite and not scene dependent.

Let's say you took a photo of this, and assume the black is 'true' black and the white 'true' white, and the DR is scored at 15EV. I don't then see why, if you crop it, the DR would drop to say 13.5EV; the cropped image still contains 'true' black and 'true' white :thinking:


[Attached image: black and white checkered pattern]
 
Again, this is understandable but when you see a camera's max DR measurement it's finite and not scene dependent.

Let's say you took a photo of this, and assume the black is 'true' black and the white 'true' white, and the DR is scored at 15EV. I don't then see why, if you crop it, the DR would drop to say 13.5EV; the cropped image still contains 'true' black and 'true' white :thinking:


It would not.
But you have two very different variables: the total range of the subject and the maximum dynamic range of the sensor.
A photographer is always trying to match the one with the other by adjusting the exposure. Adjusting the ISO mostly just slides the two scales along each other, which favours either the highlights or the shadows.

Noise is really a separate problem as it is always more visible in shadows and less exposed areas. However it can make discerning changes in tonality and detail difficult in the deeper tonal areas. This becomes worse the higher the ISO setting.
 
Again, this is understandable but when you see a camera's max DR measurement it's finite and not scene dependent.

Let's say you took a photo of this, and assume the black is 'true' black and the white 'true' white, and the DR is scored at 15EV. I don't then see why, if you crop it, the DR would drop to say 13.5EV; the cropped image still contains 'true' black and 'true' white :thinking:


I am pretty sure that DR isn't measured from pure white to pure black.

It's from the point where you can just distinguish detail above pure black to the point before you lose detail to pure white. It's why the DR measurements are fairly useless in practice, because "useful" shadow detail needs a lot more exposure than the value used to measure sensor DR.

While I understand your point, I think the general point I was making applies in practice.

As you crop into smaller and smaller areas of an image, that area is "likely" to contain a smaller range of tones than the entire image. I haven't actually tried it, but if the histogram in your software adjusts with a crop, try it, and see what happens.
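For what it's worth, the "engineering" DR figures are essentially log2(full well capacity / noise floor). A toy calculation with assumed, not camera-specific, numbers, including the effect of normalising to a fixed output size:

```python
# Toy engineering-DR calculation. Numbers are assumed, not from any real camera.
import math

full_well  = 60_000     # assumed saturation capacity, electrons
read_noise = 1.5        # assumed noise floor, electrons RMS

dr_stops = math.log2(full_well / read_noise)
print(f"Per-photosite ('engineering') DR ~ {dr_stops:.1f} stops")         # ~15.3

# Figures normalised to a fixed output size gain a little extra: averaging k
# photosites per output pixel multiplies the maximum signal by k but the noise
# floor only by sqrt(k).
k = 2.25                                                                   # e.g. FF vs APS-C area ratio
dr_normalised = math.log2((full_well * k) / (read_noise * math.sqrt(k)))
print(f"Normalised to the same output size: ~ {dr_normalised:.1f} stops")  # ~15.9
```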
 
Again, this is understandable but when you see a camera's max DR measurement it's finite and not scene dependent.

Let's say you took a photo of this, and assume the black is 'true' black and the white 'true' white, and the DR is scored at 15EV. I don't then see why, if you crop it, the DR would drop to say 13.5EV; the cropped image still contains 'true' black and 'true' white :thinking:

I remember using a similar argument years ago. What I argued was that folding a picture in half shouldn't affect DR.

Comparing cameras with different sized sensors is very problematic as there's a likelihood that even cameras from the same manufacturer will use slightly different specs, hardware tech and in camera fiddling so the best way to see if sensor size truly affects DR is to use a camera with crop modes or just crop a picture as that is in effect what smaller sensors do... assuming the tech is the same and if we equalise things that affect the light coming in such as the lens and the aperture size.

I think one issue is the aperture. If I use MFT with a 25mm lens at f10 my aperture diameter is 2.5mm, with 50mm on FF at f10 my aperture diameter is 5mm. If we shoot different format sizes but equalise such things that could affect image quality the results could be more equal. For example if we shoot MFT at f5 maybe the results will be more similar to FF at f10 assuming the specifications, the sensors and the in camera fiddling are roughly comparable. There'd still be the differences which the additional magnification of the smaller format pictures would introduce though.
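A small sketch of the "equalise the entrance pupil" idea, using the same focal lengths and f-numbers as in the post above (assumed purely for illustration):

```python
# Entrance pupil comparison between MFT and full frame. Assumed values only.
def entrance_pupil_mm(focal_length_mm, f_number):
    """Approximate entrance pupil diameter."""
    return focal_length_mm / f_number

crop_factor = 2.0                                     # MFT vs full frame

print(entrance_pupil_mm(25, 10))                      # 2.5 mm  (MFT, 25mm f/10)
print(entrance_pupil_mm(50, 10))                      # 5.0 mm  (FF, 50mm f/10)

# Matching the FF pupil on MFT means opening up by the crop factor:
print(entrance_pupil_mm(25, 10 / crop_factor))        # 5.0 mm  (MFT, 25mm f/5)
```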
 
I am pretty sure that DR isn't measured from pure white to pure black.

It's from the point where you can just distinguish detail above pure black to the point before you lose detail to pure white. It's why the DR measurements are fairly useless in practice, because "useful" shadow detail needs a lot more exposure than the value used to measure sensor DR.

While I understand your point, I think the general point I was making applies in practice.

As you crop into smaller and smaller areas of an image, that area is "likely" to contain a smaller range of tones than the entire image. I haven't actually tried it, but if the histogram in your software adjusts with a crop, try it, and see what happens.

Back in my student days there were two standards used in measuring dynamic range. The first used carbon black as the black point and magnesium oxide as the white point.
The second, used in metering with an SEI photometer, was the first distinguishable tone above black and the brightest non-specular white. Both were useful in plotting characteristic curves of film. Both the black and white points were beyond what could be achieved on photographic paper, but they were a repeatable standard.

The black and white points could be used on a Weston meter using the U or O marks when establishing exposure settings.
 
I am pretty sure that DR isn't measured from pure white to pure black.

It's from the point where you can just distinguish detail above pure black to the point before you lose detail to pure white. It's why the DR measurements are fairly useless in practice, because "useful" shadow detail needs a lot more exposure than the value used to measure sensor DR.

While I understand your point, I think the general point I was making applies in practice.

As you crop into smaller and smaller areas of an image, that area is "likely" to contain a smaller range of tones than the entire image. I haven't actually tried it, but if the histogram in your software adjusts with a crop, try it, and see what happens.
Yeah I understand what you’re saying, BUT when P2P and DXO measure DR they are using a set format for consistency and measuring the maximum range of the sensor, they are not measuring the DR of a scene so to speak.

If one part of the sensor can measure 15ev then surely another part of the sensor can therefore measure 15ev so no matter where on that sensor you take the measurement from it should still be 15ev :thinking:
I remember using a similar argument years ago. What I argued was that folding a picture in half shouldn't affect DR.

Comparing cameras with different sized sensors is very problematic as there's a likelihood that even cameras from the same manufacturer will use slightly different specs, hardware tech and in camera fiddling so the best way to see if sensor size truly affects DR is to use a camera with crop modes or just crop a picture as that is in effect what smaller sensors do... assuming the tech is the same and if we equalise things that affect the light coming in such as the lens and the aperture size.

I think one issue is the aperture. If I use MFT with a 25mm lens at f10 my aperture diameter is 2.5mm, with 50mm on FF at f10 my aperture diameter is 5mm. If we shoot different format sizes but equalise such things that could affect image quality the results could be more equal. For example if we shoot MFT at f5 maybe the results will be more similar to FF at f10 assuming the specifications, the sensors and the in camera fiddling are roughly comparable. There'd still be the differences which the additional magnification of the smaller format pictures would introduce though.
I understand different sensors having different DR, but if you’re just cropping the same sensor I don’t understand why DR changes.

Most info on t’internet states that larger sensors have better DR due to having larger photosites, but as we’ve discussed this isn’t true, as high-res sensors still have high DR, some being higher than lower-res sensors. Also you have APS-C cameras like the D7200 having better DR than FF cameras such as the Canon 5DIV, even though the Canon has larger photosites AND higher resolution, so less enlargement is needed.

With regards to m4/3 I always used wider apertures to ‘compensate’ but still didn’t find them as malleable as FF files.

I’m starting to realise it’s far more complex than I realised, however I’m yet to find an explanation that I understand and makes sense from every aspect.
 
BUT when P2P and DXO measure DR they are using a set format for consistency and measuring the maximum range of the sensor, they are not measuring the DR of a scene so to speak.

Their measures of DR depend on sensor size as CoC is a factor in the calculations and it is (nearly always) chosen for a particular format. I’ve not read their articles enough to check that though.
 
With regards to m4/3 I always used wider apertures to ‘compensate’ but still didn’t find them as malleable as FF files.

I’m starting to realise it’s far more complex than I realised, however I’m yet to find an explanation that I understand and makes sense from every aspect.

They’re not, but there’s a lot in play here. The pixel size and density will be different, and Gosh knows what the differences are in the ISO, WB and the in-camera pipeline. I think this is why, with questions such as this, the only thing we can say for certain is that when comparing smaller-sensor cameras to larger-sensor cameras of roughly the same generation, the smaller formats do perform less well. As to why, there must be multiple things in play, so the only real way we can test the theory is to take one image from one camera, crop it and compare the uncropped and cropped pictures.

If we don't do it like this we're introducing different lenses, sensors and in camera fettling and they will impact on the final image.
 
Their measures of DR depend on sensor size as CoC is a factor in the calculations and it is (nearly always) chosen for a particular format. I’ve not read their articles enough to check that though.


Ah they even have a page with more detail:

Thanks, I'll give these a read (y)
 
Yeah I understand what you’re saying, BUT when P2P and DXO measure DR they are using a set format for consistency and measuring the maximum range of the sensor, they are not measuring the DR of a scene so to speak.

If one part of the sensor can measure 15ev then surely another part of the sensor can therefore measure 15ev so no matter where on that sensor you take the measurement from it should still be 15ev :thinking:

I understand different sensors having different DR, but if you’re just cropping the same sensor I don’t understand why DR changes.

The dynamic range of the sensor is made up from the dynamic range of every individual photosite, where the lowest amount of light needed to record a signal greater than the base noise level gives you the deepest available black with detail, and the highest amount of light that a photosite can record before becoming full and blowing to complete white gives you the maximum available white with detail.

As sensors (photosites) extend their dynamic range it is usually extended into the recordable shadow detail.

Which takes me back to my earlier explanation of why the number of photosites per unit area is important. Within the shadow areas, the more photosites capturing light the greater the chance of recording detail and thus extending the dynamic range. When you only use part of the sensor you have fewer photosites available, so there is less chance of capturing enough photons to obtain shadow detail, and thus an area of shadow will go pure black sooner in the cropped sensor than it would when using the full frame, and there is a reduction in dynamic range.

Maybe going into more detail about photons would help. The theory of light energy arriving at a photosite as discrete photons of energy is the starting point.

When a lot of light is being reflected from a subject, there is a constant stream of photons. But in the deepest shadow areas far fewer photons are being reflected, and these are being reflected randomly. It isn't a constant stream of photons.

As before I'm going to use simple numbers just to demonstrate the idea.

When you make the exposure, which let's say is a second, the photosites receiving light from the bright areas of the subject will get hit with thousands of photons. So exposures in the brighter areas are more consistent and predictable.

But during that same 1s exposure a photosite receiving photons from the deepest shadow areas will receive a small number of randomly reflected photons.

It might be none, it might be 20 or it might be hundreds. So an individual photosite might record the part of the subject it's receiving light from as pure black. However the photosite next to it, receiving a slightly different number of random photons, might record some detail.

The bigger the photosite, the greater the chance of it recording some of the infrequent random photons. BUT equally, the more photosites receiving light from an area of the subject, the more chance there is that "some" of them will record detail. And the demosaicing process can extrapolate data from the photosites with data to the areas where a photosite didn't record data.

Which, as I say, takes me back to my earlier example where the FF sensor was using data from 1500 photosites to gather information from the same size of subject area for which the cropped sensor had only 1000 photosites available.

To give an unrealistic "example" comparison,
In a very deep shadow the 1000 photosites available in the cropped sensor might not record any detail, but because you have additional photosites available in the full frame sensor covering the same subject area, you may still have 1000 photosites failing to record any detail, while some of the additional 500 photosites might record some.

This provides some detail above pure black, and extends the dynamic range of the FF sensor compared to the crop sensor.

This also applies to the DXO testing, because their tests will have this exact same issue when measuring details in the shadow areas.
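As a rough numeric illustration of the random-photon argument above, here is a minimal sketch with assumed values: a very dark patch averaging 0.5 photons per photosite during the exposure, and a threshold of 2 photons for a photosite to count as recording usable detail.

```python
# Sketch of the deep-shadow photon-counting argument. Assumed numbers only.
import numpy as np

rng = np.random.default_rng(3)
mean_photons = 0.5     # assumed average photons per photosite in very deep shadow
threshold    = 2       # assumed minimum count for a photosite to register detail

def sites_with_detail(n_photosites):
    hits = rng.poisson(mean_photons, n_photosites)
    return int((hits >= threshold).sum())

# With roughly 9% of photosites expected to clear the threshold, the frame that
# puts more photosites on the same subject area simply gets more usable "hits".
print(f"Crop, 1000 photosites: ~{sites_with_detail(1000)} record detail")
print(f"FF,   1500 photosites: ~{sites_with_detail(1500)} record detail")
```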
 