Alright, let's re-rail this mutha. GPU texture sampling and 2D rotation.
Here's a close-up of a 2x2 pixel texture, filled with glorious programmer colors:
When a 2x2 pixel texture is applied to an axis-aligned, 2x2 pixel quad, there will be a 1:1 relationship between the rendered pixels and the pixels in the source texture (AKA "texels"). The top left pixel gets its color from the top left texel, the top right pixel gets its color from the top right texel and so on.
However, textures are referenced using UV coordinates, not absolute pixel coordinates. UV coordinates are unit-less, relative values in the range [0.0, 1.0]. A horizontal value of 0.0 corresponds to the very left edge of the texture, 1.0 corresponds to the very right edge and the same goes for top / bottom (in the UV coordinate system, all textures are therefore 1.0 wide by 1.0 tall, regardless of their pixel dimensions). The center of the texture is (0.5, 0.5).
This means that each texel essentially occupies an area of UV space. For example, the red texel's top left corner is at (0.0, 0.0) and its bottom right corner is at (0.5, 0.5). Its center is at (0.25, 0.25).
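In code form, a texel's UV footprint and center work out like this (a minimal Python sketch; the function names are mine, not from any graphics API):

```python
def texel_uv_rect(x, y, w, h):
    """Top left and bottom right UV corners of texel (x, y) in a w x h texture."""
    return ((x / w, y / h), ((x + 1) / w, (y + 1) / h))

def texel_uv_center(x, y, w, h):
    """UV coordinates of the center of texel (x, y)."""
    return ((x + 0.5) / w, (y + 0.5) / h)

# For the 2x2 texture: the red texel (0, 0) spans (0.0, 0.0)-(0.5, 0.5)
# and is centered at (0.25, 0.25).
```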
As the textured quad is rendered, the GPU needs to look at the source texture to know what color to give each pixel making up the quad. This is called texture sampling. For every pixel being rasterized, the GPU must read (sample) its color from the corresponding location in the texture. That location is determined by mapping the pixel coordinates (relative to the geometry being rasterized) to UV coordinates. For vanilla 2D quad texturing, the leftmost, topmost pixel's color is sampled from the top left of the texture, the rightmost, bottommost pixel's color is sampled from the bottom right of the texture and so on. Texels are typically sampled based on the calculated center of the pixel being rendered, which in the case of this axis-aligned 2x2 quad translates to pixel (0, 0) being sampled at UV (0.25, 0.25) (red), pixel (1, 0) at (0.75, 0.25) (green), pixel (0, 1) at (0.25, 0.75) (white) and pixel (1, 1) at (0.75, 0.75) (blue):
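That pixel-center mapping can be written out in a few lines of plain Python (illustrative only, no GPU semantics implied):

```python
# Sample each pixel of a 2x2 quad at its center: ((px + 0.5) / 2, (py + 0.5) / 2).
samples = {(px, py): ((px + 0.5) / 2, (py + 0.5) / 2)
           for py in range(2) for px in range(2)}

# samples[(0, 0)] -> (0.25, 0.25)   red
# samples[(1, 0)] -> (0.75, 0.25)   green
# samples[(0, 1)] -> (0.25, 0.75)   white
# samples[(1, 1)] -> (0.75, 0.75)   blue
```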
Texture sampling can be done in somewhat different ways. The most basic is point sampling. With point sampling, it doesn't matter where inside the UV area covered by a texel you do the sampling: if pixel (0, 0) were sampled at UV (0.375, 0.375) instead of at the center of the texel (0.25, 0.25), it would still be the same red.
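Point sampling boils down to a nearest-texel lookup. A minimal sketch (clamping at the edges; the names are mine):

```python
def point_sample(texture, u, v):
    """Return the color of whichever texel the UV coordinate falls inside."""
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

RED, GREEN = (255, 0, 0), (0, 255, 0)
WHITE, BLUE = (255, 255, 255), (0, 0, 255)
tex = [[RED, GREEN],
       [WHITE, BLUE]]

# Anywhere inside the red texel's footprint returns the same red:
# point_sample(tex, 0.25, 0.25) and point_sample(tex, 0.375, 0.375)
# both give (255, 0, 0).
```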
Then there's linear filtering. With linear filtering, the texture looks like this in the eyes of the texture sampler: the texel's "true" color is represented at the center of the texel, and the areas in-between are bilinear interpolations of the four surrounding texel colors.
The underlying texture is not changed in any way (a texture just consists of discrete pixels like any other bitmap image; the UV mapping is simply an abstraction on top), but in this mode the GPU will read four colors from the texture and create a biased blend between them based on the UV coordinates.
Had the original 2x2 quad been rendered using linear filtering and with the sampling coordinates offset as in the above illustration (pixel (0, 0) sampled from UV (0.375, 0.375), pixel (1, 0) from (0.75, 0.25), pixel (0, 1) from (0.475, 0.525) and pixel (1, 1) from (0.625, 0.875)), the resulting image would have looked like this instead:
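Bilinear filtering itself is easy to sketch in software. The version below assumes the half-texel-center convention described above and clamp-to-edge addressing at the borders (helper names are mine, not any particular API's):

```python
import math

def bilinear_sample(texture, u, v):
    h, w = len(texture), len(texture[0])
    # Shift into texel space so that texel centers land on integer coordinates.
    x, y = u * w - 0.5, v * h - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0

    def texel(tx, ty):
        # Clamp-to-edge at the borders.
        return texture[min(max(ty, 0), h - 1)][min(max(tx, 0), w - 1)]

    def lerp(a, b, t):
        return tuple(ca * (1 - t) + cb * t for ca, cb in zip(a, b))

    # Blend the four surrounding texels, weighted by the fractional position.
    top = lerp(texel(x0, y0), texel(x0 + 1, y0), fx)
    bottom = lerp(texel(x0, y0 + 1), texel(x0 + 1, y0 + 1), fx)
    return lerp(top, bottom, fy)

RED, GREEN = (255, 0, 0), (0, 255, 0)
WHITE, BLUE = (255, 255, 255), (0, 0, 255)
tex = [[RED, GREEN],
       [WHITE, BLUE]]

# At a texel center the "true" color comes through unchanged;
# dead center of the texture blends all four texels equally.
```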
Now, let's take a look at how this affects something like a 2D rotation. Here's a close-up of an axis-aligned 8x8 pixel quad. If we put an 8x8 texture onto this, we'd get that same pixel-perfect 1:1 mapping as before.
Then apply a 15 degree rotation. Even though geometry can be rotated freely and rather precisely in abstract 2D space, the rasterizer has to conform to a rigid pixel grid, and suddenly the mapping from rendered pixels to texels isn't as clear cut anymore (the rotated quad even covers more pixels than there are texels: 65 rasterized pixels vs the 64 texels of the hypothetical 8x8 texture).
Assuming the same center-of-pixel strategy as before, the sampling points for these pixels would be here..
..while the green overlay indicates the pixel grid corresponding to the "ideal" 1:1 mapping we get when axis-aligned:
Normalizing the rotation relative to the texture being sampled, it's easy to see that the sampling points no longer occur at the texel centers. If the sampler is using linear filtering, this means that the samples will be weighted interpolations between the "true" colors actually present in the underlying source texture, effectively providing an anti-aliasing effect.
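To put numbers on it, here's a toy calculation of where a rotated quad's pixel centers land back in texture space (a sketch under my own conventions; the rotation direction and naming are arbitrary, not how any particular GPU expresses this):

```python
import math

def rotated_sample_uv(px, py, size, angle_deg):
    """UV sampling point for pixel (px, py) of a size x size quad rotated
    about its center by angle_deg: inverse-rotate the pixel center back
    into the quad's local space, then normalize to UV."""
    a = math.radians(-angle_deg)
    c = size / 2.0
    x, y = px + 0.5 - c, py + 0.5 - c
    lx = x * math.cos(a) - y * math.sin(a) + c
    ly = x * math.sin(a) + y * math.cos(a) + c
    return (lx / size, ly / size)

# Unrotated, every pixel samples exactly at a texel center...
# rotated_sample_uv(0, 0, 8, 0.0) -> (0.0625, 0.0625)
# ...but at 15 degrees the sampling points drift off the texel centers,
# which is exactly what makes linear filtering start blending.
```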
Examples of the above, courtesy of ApoorvaJ: