Optional subpixel sampling for high-Q zoom
Posted: Fri Jun 01, 2007 1:03 pm
This is a trick some camera manufacturers use to squeeze more resolution out of their displays (where 320x240 is considered high-res); the infamous MS ClearType employs it too. It requires knowing the subpixel structure of the display (monitor), since that is exactly what gets exploited. A user can find it out with a loupe; on my trusty Samsung 191T it's the classic horizontal R-G-B pattern. I'm not sure whether PC monitors with triangular patterns exist, but I think B-G-R and other permutations are used.
Normally one treats each monitor pixel as a single unit responsible for all of R, G and B. With subpixel sampling, however, the R, G and B channels of the interpolated image are sampled at slightly shifted positions, matching the subpixel structure.
Imagine this monochrome 3x1 "image":
Code: Select all
X . #
along with this kind of 1x1 "monitor":
Code: Select all
R G B
Normally the output might look something like this:
Code: Select all
U U U
With subpixel sampling, the result could be:
Code: Select all
X . #
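To make this concrete, here is a minimal sketch in Python/NumPy (purely illustrative, not taken from any actual renderer) of the 3x1 example above, reading X as bright, . as dark and # as something in between:
Code: Select all
import numpy as np

# The monochrome 3x1 "image" from the example: X (bright), . (dark), # (in between).
image = np.array([1.0, 0.0, 0.5])

# Conventional display: the whole pixel gets a single luminance value,
# so R, G and B all show the average of the three samples -> "U U U".
normal_pixel = np.full(3, image.mean())   # [0.5, 0.5, 0.5]

# Subpixel sampling: each channel takes the sample that falls on its own
# subpixel (R left, G centre, B right) -> "X . #".
subpixel_pixel = image.copy()             # [1.0, 0.0, 0.5]

print("normal  :", normal_pixel)
print("subpixel:", subpixel_pixel)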
The luminance resolution has obviously increased, but so has the risk of artifacts: if we're unlucky, a bright white car light, strongly zoomed out, ends up on a single red subpixel. (The technique is therefore something like the inverse of Bayer interpolation.)
I am not quite sure how this squares with our beloved sampling theorem. As with the normal high-Q display, this is an undersampling problem that requires lowpass filtering, only now with the R, G and B sampling positions shifted. OK, maybe I've got it: in the lowpass-filtered image (a step I left out of my example, which therefore wasn't too well chosen), a situation where an object occupies only a single subpixel must not occur, since in our R-G-B scenario that would mean a horizontal frequency of 3/2 fs (plus harmonics), with fs being the full-pixel sampling rate. The possible gain in luminance resolution is therefore limited, but a smoother image should still be possible, and isn't quality always an argument?
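As a rough sketch of what the full pipeline might look like (the function name and the box filter are my own stand-ins; a real implementation would use a proper lowpass/interpolation kernel), downscaling a wide monochrome line onto an R-G-B display line with per-channel shifts of 1/3 pixel:
Code: Select all
import numpy as np

def subpixel_downscale_line(src, out_pixels):
    """Downscale a 1-D monochrome line to out_pixels R-G-B pixels, averaging
    each channel over a window centred 1/3 pixel left (R), centre (G) or
    right (B) of the pixel centre - a box filter as a crude lowpass."""
    scale = len(src) / out_pixels                 # source samples per output pixel
    offsets = (-1.0 / 3.0, 0.0, +1.0 / 3.0)       # R, G, B subpixel centres
    out = np.zeros((out_pixels, 3))
    for i in range(out_pixels):
        for c, off in enumerate(offsets):
            centre = (i + 0.5 + off) * scale      # channel-specific sampling position
            lo = max(int(np.floor(centre - scale / 2)), 0)
            hi = min(int(np.ceil(centre + scale / 2)), len(src))
            out[i, c] = src[lo:hi].mean()
    return out

# A 30-sample line with one bright spot, shrunk to 4 pixels: the spot spreads
# over neighbouring subpixels instead of landing on a single one.
line = np.zeros(30)
line[14] = 1.0
print(subpixel_downscale_line(line, 4))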
Wouldn't this be worth a try, or what do you think?