Hi,
I’m trying to wrap my head around the rolling shutter effect, specifically why it happens.
I’m having a hard time understanding how the readout speed affects the image. If I understood correctly, in electronic shutter mode each pixel is exposed for the duration set by the shutter speed (e.g. at 1/1000 s, each pixel is exposed for 1/1000 of a second).
If the readout takes 1/100 s to scan the entire sensor, what exactly happens when I take the picture? Do the pixels start firing sequentially at the rate the shutter speed dictates (i.e. 1/1000 s each, one after the other)? If so, do they wait for the readout to catch up, or do they keep firing? If the latter, by the time the readout reaches the second pixel the eleventh pixel is already firing, so there are 10 pixels between the one firing and the one being read. Does it work like this?
If the pixels are exposed for 1/1000 s and then turned off and their value stored, wouldn’t that mean the image should not be affected? I mean, they saw the subject for 1/1000 s, so the motion should be frozen; they are just waiting for their value to be read. It’s like asking 10 people to open their eyes for 1 second (shutter speed), one after the other, and draw what they see. They each saw it for one second, so at most the difference in position of what they saw should span 10 seconds. Then they can take hours to draw what they saw (readout speed), but what they actually saw wouldn’t be affected by how long it takes them to draw it. Am I wrong here, maybe?
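To make the timeline I’m imagining concrete, here’s a tiny sketch of my mental model (completely made-up numbers and a made-up per-pixel readout pace, just to mirror the “10 people” picture; I have no idea whether a real sensor sequences things like this):

```python
EXPOSURE = 1 / 1000   # each pixel sees the scene for 1/1000 s
READ_STEP = 1 / 100   # how long the readout spends before moving on to the next pixel
                      # (made-up pace for this toy; the only point is that it is much
                      #  slower than the exposure itself)

for i in range(10):                         # ten "pixels", fired one after the other
    saw_from = i * EXPOSURE                 # pixel i opens its eyes...
    saw_to = saw_from + EXPOSURE            # ...and closes them 1/1000 s later, value frozen
    read_at = EXPOSURE + i * READ_STEP      # the readout only gets to it here
    print(f"pixel {i}: saw {saw_from * 1000:5.1f}-{saw_to * 1000:5.1f} ms, "
          f"value read at {read_at * 1000:6.1f} ms")

# In this picture, what the pixels *saw* spans only 10 ms in total, no matter
# how slow the readout is - which is exactly why I don't see where the
# distortion would come from.
```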
Also, in general, why is a mechanical shutter not as affected (if affected at all) by the rolling shutter effect? Does the sensor capture light differently in mechanical shutter mode?
I just don’t get it. I feel like I’m close to understanding why, but I still don’t.
I know I’m probably weird for focusing so much on something technical like this, but it just bugs me so much.
Any help is greatly appreciated, really.
Heh, sorry if I sound pedantic - I thought the previous one would be my final message. Let me try once more with this one.
Yes, the steps are empty, expose, measure. I was just trying to explain the difference I see in how these steps are handled electronically in MS versus ES. In both cases, though, the pixels go through the same three steps in the same order: empty, expose, measure.
The “only” difference I’m finding is that in MS the “measuring” step happens differently than in ES: at each photosite, “measuring” is just, well, sequentially reading out the amount of light collected, and that is limited by the readout speed.
In ES, the “measuring” step is preceded every time by “emptying” and “exposing”, done sequentially and all controlled electronically.
This makes me think that there is some freedom in choosing when to empty and expose pixels electronically, and that it isn’t really limited by any particular speed or sequence: in MS you don’t really care, because the curtains do the job; in ES you must control precisely when to empty a pixel and when to expose it.
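Just to check that I’m describing the difference correctly, here’s a rough sketch of how I picture the two sequences, row by row and with made-up numbers (surely not how real firmware looks):

```python
EXPOSURE = 1 / 1000       # chosen shutter speed: 1 ms per row
SCAN_TIME = 1 / 100       # the readout needs 10 ms to sweep every row
ROWS = 8                  # toy sensor
STEP = SCAN_TIME / ROWS   # the sweep moves to the next row every 1.25 ms

print("MS: one global exposure window, reading happens afterwards in the dark")
for r in range(ROWS):
    exposed = (0.0, EXPOSURE)          # the curtains define the same window for every row
    read_at = EXPOSURE + r * STEP      # readout just trails along afterwards
    print(f"  row {r}: exposed {exposed[0]*1000:.2f}-{exposed[1]*1000:.2f} ms, "
          f"read at {read_at*1000:5.2f} ms")

print("ES: each row is emptied so its exposure ends exactly when the sweep reaches it")
for r in range(ROWS):
    read_at = EXPOSURE + r * STEP                # the sweep reaches row r here...
    exposed = (read_at - EXPOSURE, read_at)      # ...so the row was emptied 1 ms earlier
    print(f"  row {r}: exposed {exposed[0]*1000:.2f}-{exposed[1]*1000:.2f} ms, "
          f"read at {read_at*1000:5.2f} ms")
```

The exposure duration is identical in both cases; the difference is that in ES each row’s 1 ms window slides along with the readout sweep, so the first and last rows end up exposed about 10 ms apart.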
This picture would also agree with how I understand EFC-S works: the closing of the curtain is limited by the curtain speed, so you have to empty and expose the pixels at the right time and let the curtain end the exposure, so that each one collects the specified amount of light. So if you’re shooting at 1/1000 s and the curtain closes in 1/250 s, you follow the ES method and empty+expose those buckets a tiny bit before the curtain passes over them. And since after the curtain it’s darkness, just like in MS you can wait for the readout without worry.
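Putting my 1/1000 + 1/250 example into rough numbers (assuming “1/250” means the time the second curtain needs to travel across the whole sensor, and that it moves at a constant speed, which is probably not exactly true):

```python
EXPOSURE = 1 / 1000        # 1 ms of light per row
CURTAIN_TRAVEL = 1 / 250   # 4 ms for the curtain to cross the sensor (my assumption)
ROWS = 8                   # toy sensor; times are relative to when the curtain starts moving

for r in range(ROWS):
    covered_at = r / (ROWS - 1) * CURTAIN_TRAVEL   # the curtain reaches row r here
    empty_at = covered_at - EXPOSURE               # so the row is emptied 1 ms before that
    print(f"row {r}: emptied at {empty_at*1000:5.2f} ms, covered at {covered_at*1000:5.2f} ms")

# Every row still gets exactly 1 ms of light, and once the curtain has passed,
# the rows sit in darkness and can wait for the readout, just like in MS.
```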
Therefore, my understanding is that, electronically speaking, emptying and exposing the pixels can happen at very high speed, independently of the measuring step: in MS you empty and expose all of the pixels at once (from the sensor’s POV; of course it’s the curtain that does the exposure job) and then you measure them - the measuring is done once, “alone”; in ES you empty, expose, and measure each pixel individually - the measuring is coordinated with the emptying and exposing (which also agrees with your beautiful pseudo-code).
EDIT: maybe it can all be reduced to the emptying action alone. You can empty whenever you want, and since the pixel starts gathering light again as soon as you’re done emptying it, as long as you time the emptying correctly the exposure can be handled however you prefer: by the curtains in MS, by emptying at the right time so that the pixel has been exposed correctly when the readout reaches it in ES, and by emptying right before the curtain passes over it in EFC-S.
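If it really does reduce to the emptying alone, then (as a sketch, with my own made-up naming) ES and EFC-S become the same one-liner, just with a different “end of exposure” event, while MS doesn’t need any electronic timing at all:

```python
def when_to_empty_row(exposure_ends_at, exposure):
    """Emptying starts the exposure, so empty the row `exposure` seconds before
    whatever ends it: the readout sweep in ES, the (second) curtain in EFC-S."""
    return exposure_ends_at - exposure

# ES example: the sweep reads this row 6 ms into the scan, shutter 1/1000 s
print(f"{when_to_empty_row(0.006, 1 / 1000) * 1000:.1f} ms")   # -> 5.0 ms
# EFC-S example: the curtain covers this row 4 ms after it starts moving
print(f"{when_to_empty_row(0.004, 1 / 1000) * 1000:.1f} ms")   # -> 3.0 ms
# MS: both ends of the exposure are set by the curtains, so the emptying only has
# to happen some time before the first curtain opens; its exact timing doesn't matter.
```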
To me, EFC-S takes the best of both worlds: no fear of distortion, because the reading phase is done in darkness, and a higher theoretical shutter (i.e. exposure) speed, because you can empty+expose each pixel electronically almost right up until the curtain closes over it.
I hope you understand that I truly appreciate your help, and I’m sorry I keep hammering away at this stuff. Thank you for all the patience you’ve had with me.
I think you’ve got it, especially with the tweak in your edit. I’m happy to help where I can; hopefully my verbosity didn’t get too much in the way.
Your verbosity did not get in the way, at all. I appreciate every second you spent trying to help a random stranger on the internet understand something that, to you, was probably straightforward.
So thank you very very much and have a wonderful day!
We’re all just a bunch of random people seeking to learn more, so I’m glad to hear it helped!