Yeah, I understand what you're saying, BUT when P2P and DXO measure DR they are using a set format for consistency and measuring the maximum range of the sensor; they are not measuring the DR of a scene, so to speak.
If one part of the sensor can measure 15 EV, then surely another part of the sensor can also measure 15 EV, so no matter where on that sensor you take the measurement from, it should still be 15 EV.
I understand different sensors having different DR, but if you're just cropping the same sensor, I don't understand why the DR changes.
The dynamic range of the sensor is built up from the dynamic range of every individual photosite. The lowest amount of light needed to record a signal greater than the base noise level gives you the deepest available black with detail, and the highest amount of light a photosite can record before becoming full, and blowing to complete white, gives you the maximum available white with detail.
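To put rough numbers on that (these figures are invented, not taken from any real sensor), the dynamic range in stops is approximately the log2 of the full-well capacity over the noise floor:

```python
import math

# Hypothetical photosite figures, purely for illustration.
full_well = 60000   # electrons a photosite can hold before clipping to pure white
noise_floor = 2.0   # electrons of read noise; signals below this are lost

# Dynamic range in stops (EV): each stop is a doubling of light.
dr_stops = math.log2(full_well / noise_floor)
print(f"dynamic range = {dr_stops:.1f} stops")  # about 14.9
```

Notice that those made-up numbers land close to the 15 EV figure mentioned above.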
As sensors (photosites) extend their dynamic range, it is usually extended into the recordable shadow detail.
Which takes me back to my earlier explanation of why the number of photosites per unit area is important. Within the shadow areas, the more photosites capturing light, the greater the chance of recording detail and thus extending the dynamic range. When you only use part of the sensor you have fewer photosites available, so there is less chance of capturing enough photons to obtain shadow detail, and thus an area of shadow will go pure black sooner on the cropped sensor than it would when using the full frame, and there is a reduction in dynamic range.
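A back-of-an-envelope way to see it, assuming photon arrivals follow Poisson statistics (the standard model for this; the mean figure here is invented):

```python
import math

# Suppose each photosite in a deep shadow averages just 0.01 photons over
# the exposure (an invented figure). Under Poisson statistics the chance a
# photosite records at least one photon is 1 - e^(-mean).
lam = 0.01
p_hit = 1 - math.exp(-lam)

for n_sites in (1000, 1500):
    print(f"{n_sites} photosites -> ~{n_sites * p_hit:.0f} expected to record something")
```

More photosites covering the same patch of subject means more of them end up holding some signal.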
Maybe going into more detail about photons would help. The starting point is the theory that light energy arrives at a photosite as discrete photons.
When a lot of light is being reflected from a subject, there is a constant stream of photons. But in the deepest shadow areas far fewer photons are being reflected, and they arrive randomly; it isn't a constant stream of photons.
As before I'm going to use simple numbers just to demonstrate the idea.
When you make the exposure, let's say it's a second, the photosites receiving light from the bright areas of the subject will get hit with thousands of photons, so exposure in the brighter areas is more consistent and predictable.
But during that same 1 s exposure, a photosite receiving photons from the deepest shadow areas will receive a small number of randomly reflected photons.
It might be none, it might be 20, or it might be hundreds. So an individual photosite might record the part of the subject it's receiving light from as pure black, while the photosite next to it, receiving a slightly different number of random photons, might record some detail.
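Here's a quick simulation of that, again assuming Poisson arrivals; the mean rates are made up purely to demonstrate the idea:

```python
import numpy as np

rng = np.random.default_rng(42)

# Photon counts at 10 adjacent photosites over the same 1 s exposure,
# assuming Poisson arrivals. The mean rates are invented for illustration.
bright = rng.poisson(lam=10000, size=10)  # well-lit area: thousands of photons each
shadow = rng.poisson(lam=1.0, size=10)    # deep shadow: a handful, at random

print("bright:", bright)  # counts cluster tightly around 10000
print("shadow:", shadow)  # typically a mix of 0s and small counts
```

The bright photosites all read nearly the same value, while neighbouring shadow photosites can read anything from nothing to several photons.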
The bigger the photosite, the greater the chance of it catching some of the infrequent, random photons. BUT equally, the more photosites receiving light from an area of the subject, the more chance there is that "some" of them will record detail. And the demosaicing process can extrapolate data from the photosites with data into the areas where a photosite didn't record anything.
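As a very loose sketch of that last point (real demosaicing is far more sophisticated than this), a photosite that recorded nothing can borrow an estimate from whichever neighbours did record photons:

```python
import numpy as np

rng = np.random.default_rng(1)
row = rng.poisson(lam=1.0, size=12).astype(float)  # one row of shadow photosites

# Crude stand-in for the idea: where a photosite recorded nothing, estimate
# its value from the immediate neighbours that did record photons.
filled = row.copy()
for i in np.flatnonzero(row == 0):
    neighbours = row[max(i - 1, 0):i + 2]
    if (neighbours > 0).any():
        filled[i] = neighbours[neighbours > 0].mean()

print("raw:   ", row)
print("filled:", filled)
```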
Which, as I say, takes me back to my earlier example, where the FF sensor was using data from 1500 photosites to gather information from the same size of subject area for which the cropped sensor had only 1000 photosites available.
To give an unrealistic "example" comparison:
In a very deep shadow, the 1000 photosites available on the cropped sensor might not record any detail. Because you have additional photosites available on the full frame sensor covering the same subject area, you may still have 1000 photosites failing to record any detail, but some of the additional 500 photosites might record something.
This provides some detail above pure black, and extends the dynamic range of the FF sensor compared to the crop sensor.
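And a quick simulation of that 1000 vs 1500 comparison, with the same Poisson assumption and an invented photon rate:

```python
import numpy as np

rng = np.random.default_rng(7)
lam = 0.002     # invented mean photons per photosite in a very deep shadow
trials = 2000   # repeat the same exposure many times

for n_sites, label in ((1000, "crop"), (1500, "full frame")):
    photons = rng.poisson(lam, size=(trials, n_sites))
    hits = (photons > 0).sum(axis=1)   # photosites with any signal, per exposure
    print(f"{label:10s}: avg photosites with data = {hits.mean():.1f}, "
          f"P(patch completely black) = {(hits == 0).mean():.2f}")
```

With these made-up numbers the crop patch records no detail at all roughly three times as often as the full frame patch does.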
This also applies to the DXO testing, because their tests will run into this exact same issue when measuring detail in the shadow areas.