Normally one assumes that the monitor displays whole pixels, each responsible for R, G and B together. With subpixel sampling, however, the R, G and B channels of an interpolated image are sampled at slightly shifted positions, matching the subpixel structure.

Imagine this monochrome 3x1 "image":

`X . #`
`R G B`
`U U U`
`X . #`
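To make the shifted sampling concrete, here is a minimal Python sketch (my own illustration, not an existing implementation, and the 1/3-pixel offsets are just the assumption for an RGB stripe panel): each output pixel samples the source image three times, with the R and B samples offset by ∓1/3 of an output pixel relative to G, using plain linear interpolation.

```python
def lerp(img, x):
    """Linearly interpolate a 1-D image (list of floats) at fractional position x."""
    x = min(max(x, 0.0), len(img) - 1.0)  # clamp to the image edges
    i = int(x)
    f = x - i
    if i + 1 >= len(img):
        return img[-1]
    return img[i] * (1.0 - f) + img[i + 1] * f

def subpixel_sample(img, n_pixels):
    """Downsample a monochrome image to n_pixels RGB pixels, sampling each
    channel at its subpixel position (R, G, B sit 1/3 pixel apart)."""
    scale = len(img) / n_pixels  # source samples per output pixel
    out = []
    for p in range(n_pixels):
        center = (p + 0.5) * scale - 0.5  # source position of the pixel center
        r = lerp(img, center - scale / 3.0)  # R subpixel: shifted left
        g = lerp(img, center)                # G subpixel: pixel center
        b = lerp(img, center + scale / 3.0)  # B subpixel: shifted right
        out.append((r, g, b))
    return out
```

With conventional whole-pixel sampling all three channels would get the value at `center`; here each channel picks up the luminance at its own physical position instead.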

I am not quite sure how that squares with our beloved sampling theorem. As with an ordinary high-quality display, this is an undersampling problem requiring lowpass filtering, only this time with the R, G and B channels shifted. OK, maybe I've got it: it is obvious that in the lowpass-filtered image (a step I left out of my example, which therefore wasn't too well chosen), a scenario like an object occupying only a single subpixel must not occur, because in our R-G-B layout that would mean a horizontal frequency of 3/2 fs (plus harmonics)! The possible increase in luminance resolution is therefore likely to be limited, but a smoother image display should be possible nonetheless, and isn't quality always an argument?
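The lowpass step I left out could be sketched like this (a crude 3-tap box filter, purely my assumption for illustration; a properly designed reconstruction filter would do better): applied at subpixel resolution, it spreads a single-subpixel spike over its neighbours, so no energy survives at only one subpixel.

```python
def box_lowpass(img, radius=1):
    """Simple moving-average lowpass over a 1-D image (3-tap box for radius=1),
    with the window shrinking at the edges."""
    out = []
    for i in range(len(img)):
        window = img[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

# A feature lit on exactly one subpixel -- the forbidden 3/2 fs case:
spike = [0, 0, 0, 1, 0, 0, 0]
print(box_lowpass(spike))  # the spike is spread over three subpixels
```

After filtering, whatever the R, G and B subpixels pick up at their shifted positions varies smoothly, which is exactly the condition the sampling theorem asks for.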

Wouldn't this be worth a try? What do you think?