Recently, I’ve been playing Octopath Traveler, and I’m very disappointed with the poor texture filtering in this game. It is PS1-level, with severe shimmering artifacts, which ruins the otherwise nice pixel art. Tastefully merging retro pixel art with a 3D environment is a very cool aesthetic to me, so I want it to look right. Taking a signal processing approach to the problem, I wanted to see if I could solve the issue in a mathematically sound way.
Correctly filtering pixel art is a challenge, especially in a 3D environment, because none of the hardware-assisted GPU filtering methods work well. We have two GPU filters available to us:
- NEAREST/POINT: Razor sharp filtering, but exhibits severe aliasing artifacts.
- LINEAR: Smooth, but way too blurry.
Our goal is to preserve the pixelated nature of the textures, yet have an alias-free result. A classic solution to this problem is to pre-scale the texture by some integer multiple with NEAREST, then sample this texture with LINEAR filtering. While this works reasonably well, it costs a lot of extra VRAM and bandwidth to sample huge textures which mostly contain duplicated pixels. Duplicating pixels also puts a ceiling on how sharp the pixels can become. On top of that, straight LINEAR still has some level of aliasing, as its triangular kernel makes it a rather poor low-pass filter.
LINEAR texture sampling also makes a flawed assumption: it treats each sample point in the texture as a Dirac delta function, i.e. the sample value exists only at the texel center and has infinitesimal area. The pixel art assumption is the opposite: each texel has proper area, and the sample value extends across the entire texel. For bandlimited signals, i.e. “natural” images, the Dirac delta model is how you sample, but pixel art is not a sampled bandlimited signal.
Filtering textures in 3D means we need to work well in many different scenarios:
- Magnification
- Minification
- Rotation
- Scale
- Anisotropy / uneven scaling (look at the texture at an angle)
For minification, we are going to rely on the existing texture filtering hardware to do mip-mapping and anisotropic filtering for us, and assume that this is a good enough approximation to the true solution. What we want to focus on is magnification, as this is where the blurring issues with LINEAR become obvious.
[Figure: NEAREST filtering, aliasing from hell. It’s even worse in motion. Zoomed in 4x with pixel duplication.]
[Figure: Straight LINEAR, in linear space. A blurry mess. Zoomed in 4x with pixel duplication.]
[Figure: My method. Smooth and sharp. Zoomed in 4x with pixel duplication.]
Bandlimited sampling
To understand the derivation, we must understand what bandlimited sampling means. All signals (audio or images) have a frequency response. Nyquist’s theorem tells us that if we sample at some frequency F, there must be no frequency component above F / 2 in the signal, or it aliases. (F / 2 is also called the Nyquist frequency.) Our input signal, i.e. a series of Dirac delta functions, has an infinite frequency response. Therefore, we must convolve the signal with a low-pass filter to reject as much energy above the Nyquist frequency as possible before we sample it again. LINEAR sampling applies a triangular kernel, which acts as a low-pass filter, but it is not very good at removing high frequencies. NEAREST effectively applies a rect filter, which is the worst low-pass filter you can have.
Triangle kernel:
$$W(x) = \max(1 - |x|, 0)$$
Rect kernel:
$$R(x) = \begin{cases} 1 & |x| \le \frac{1}{2} \\ 0 & \text{otherwise} \end{cases}$$
For completeness, there exists a perfect low-pass filter: sinc. However, it is purely theoretical, as its extent is infinitely large. A common approach is to limit the extent of the sinc using a cleverly chosen window function, but then we lose the perfect low-pass. We can get as close as we wish to the perfect low-pass by using a larger window, but it’s rarely practical to go that far except for image/video resizing, where we are able to use separable filtering in multiple passes with precomputed filter coefficients. We will not have that luxury for texture filtering in a 3D setting.
Windowed sinc also has negative filter coefficients, which usually manifest themselves as “ringing” or “haloing” when filtering graphics. This is something we would like to avoid when filtering pixel art as well. We also need to avoid negative values in the kernel because we will rely on an all-positive kernel for a crucial optimization later.
We will just need to come to terms with the fact that we can never get perfect bandlimiting, and be practical about choosing our filter, hence pseudo-bandlimited.
Picking a practical filter kernel
To sample pixel art, we actually need to apply two filters at once, which complicates things. First, we need to apply a rect kernel (NEAREST filter) to give our texel some proper area. We then apply a low-pass filter which will aim to band-limit the rect. Applying two filters after each other is the same as convolving them together. Convolution is an integral, so now we have some constraints on our filter kernel, because it needs to be cheap to analytically integrate. LUTs will be too costly and annoying to use.
A key point is that the low-pass filter kernel needs to adapt to our sampling rate of the texture. Basically, we need to be bandlimited in screen space, not texture space. If we have a filter kernel
$$W(x)$$
we should be able to change it to
$$W_d(x) = \frac{1}{d} W\left(\frac{x}{d}\right)$$
where d is the screen-space partial derivative of the texel coordinate in either X or Y.
// Get derivatives in texel space.
// Need a non-zero derivative since we divide by it later.
vec2 d = max(fwidth(uv) * texture_size, 1.0 / 256.0);
For our filter kernel, we could pick between:
- Triangle (piece-wise integration, annoying)
- Rect (lol)
- Polynomial (Easy to integrate)
- Cosine (Easy to integrate)
- Gaussian (No analytic integration possible)
- Windowed sinc (negative lobes and super difficult integral, no thanks)
I chose a cosine kernel. If we think about Taylor expansions, a polynomial kernel and a cosine kernel are basically in the same ballpark. Cosine is not a perfect low-pass by any means, but it’s pretty good for our purpose here.
The cosine kernel will be
$$W(x) = \frac{\pi}{4} \cos\left(\frac{\pi}{2} x\right), \quad |x| \le 1$$
The normalization factor $\frac{\pi}{4}$ makes sure the area of the kernel is 1. The rect kernel is
$$R(x) = \begin{cases} 1 & |x| \le \frac{1}{2} \\ 0 & \text{otherwise} \end{cases}$$
To get our filter for a given d, we convolve the two kernels:
$$F(x) = (W_d * R)(x) = \int_{-\infty}^{\infty} W_d(\tau)\, R(x - \tau)\, d\tau = \frac{\pi}{4d} \int_{x - \frac{1}{2}}^{x + \frac{1}{2}} \cos\left(\frac{\pi \tau}{2d}\right) d\tau$$
The integration boundaries need to be limited to the range where both W and R are non-zero. If we solve this, we get
$$F(x) = \frac{1}{2} \left( \sin\left(\frac{\pi}{2} \left[\frac{x + \frac{1}{2}}{d}\right]\right) - \sin\left(\frac{\pi}{2} \left[\frac{x - \frac{1}{2}}{d}\right]\right) \right)$$
The square brackets denote a signed saturate of the integration boundaries, i.e. clamp(x, -1, 1).
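As a sanity check, F translates directly to GLSL (a hypothetical helper for illustration; the real shader folds this into fewer operations):
// Direct evaluation of the convolved kernel F(x) for a given d.
// The clamp implements the signed saturate of the integration range.
mediump float bandlimited_weight(mediump float x, mediump float d)
{
    const mediump float half_pi = 1.57079632679;
    mediump float lo = clamp((x - 0.5) / d, -1.0, 1.0);
    mediump float hi = clamp((x + 0.5) / d, -1.0, 1.0);
    return 0.5 * (sin(half_pi * hi) - sin(half_pi * lo));
}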
Here’s how the kernel looks for different d values:
[Plots of F(x) for d = 1, d = 0.5, d = 0.25, and d = 0.1.]
Nice, so as we can see, the filter kernel starts off fairly smooth, but sharpens into a smoothly rolling off rect as we sample with a higher and higher resolution.
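To see why it tends toward a rect, consider the limit as d shrinks to 0: the clamped arguments saturate to ±1 everywhere except right at the texel edges, so
$$\lim_{d \to 0} F(x) = \begin{cases} \frac{1}{2}\left(\sin\frac{\pi}{2} - \sin\left(-\frac{\pi}{2}\right)\right) = 1 & |x| < \frac{1}{2} \\ 0 & |x| > \frac{1}{2} \end{cases}$$
i.e. F converges pointwise to the rect kernel R.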
The 2×2 filter (d <= 0.5)
From the filter kernel above, we can see that if d <= 0.5 (LOD <= -1), the extent of the filter kernel is at most one texel. This means we can implement the kernel as a simple 2×2 kernel, or as we shall see, a single bilinear filter tap. For the implementation, we are going to assume we are sampling between two texels, where x is the phase, in range 0 to 1.
We can implement this as a simple lerp between the two texels, where the lerp factor is
$$w = \frac{1}{2} + \frac{1}{2} \sin\left(\frac{\pi}{2} \left[\frac{x - \frac{1}{2}}{d}\right]\right)$$
Instead of evaluating two sines, we’re just going for the one which implements the transition from 0 to 1.
This is very similar to a smoothstep, which explains why smoothstep techniques for this kind of filtering work so well. sin might be rather expensive on your GPU targets, so instead we can use a simple Taylor expansion to get a very good approximation to the sine over the range we need:
$$\sin(x) \approx x - \frac{x^3}{6} + \frac{x^5}{120}, \quad x \in \left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$$
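A minimal GLSL sketch of that approximation, matching the taylor_sin call in the snippet below (the exact definition in the real header may differ):
// Degree-5 Taylor approximation of sin(x). Only accurate on roughly
// [-pi/2, pi/2], but the filter only ever passes arguments in that
// range, since the operand is clamped to [-1, 1] before being scaled
// by pi/2.
mediump vec2 taylor_sin(mediump vec2 x)
{
    mediump vec2 x2 = x * x;
    mediump vec2 x3 = x2 * x;
    mediump vec2 x5 = x3 * x2;
    return x - x3 * (1.0 / 6.0) + x5 * (1.0 / 120.0);
}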
In fact, if we use smoothstep, we get a filter kernel which is the derivative of smoothstep. Now, if we compute the result for both the X and Y dimensions, we will have lerp factors for both dimensions. Since our filter weights are all positive, and our kernel is separable, we can make use of bilinear filtering to get the correct result.
// Get base pixel and phase, range [0, 1).
vec2 pixel = uv * texture_size - 0.5;
vec2 base_pixel = floor(pixel);
vec2 phase = pixel - base_pixel;
// We can resolve the filter by just sampling a single 2x2 block.
mediump vec2 shift = 0.5 + 0.5 * taylor_sin(PI_half * clamp((phase - 0.5) / min(d, vec2(0.5)), -1.0, 1.0));
vec2 filter_uv = (base_pixel + 0.5 + shift) * inverse_texture_size;
As d increases, we should no longer use our filter, since we can only support up to d = 0.5, so we implement something à la trilinear filtering, where we lerp between our ideal LOD = -1 result and LOD = 0, where we fully sample with a normal trilinear/anisotropic filter. This implementation only requires two bilinear texture lookups: one for the d <= 0.5 sampling, and one for the d > 0.5 sampling. These two results are then lerped.
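A sketch of what that blend could look like, using the filter_uv computed above (the blend range and names here are illustrative, not the exact ones from the real header):
// Blend between the bandlimited single tap and the normal
// trilinear/aniso fallback as d moves past 0.5.
mediump float max_d = max(d.x, d.y);
mediump float fallback_amount = clamp(2.0 * max_d - 1.0, 0.0, 1.0);
mediump vec4 sharp = textureLod(tex, filter_uv, 0.0);
mediump vec4 fallback = texture(tex, uv);
mediump vec4 color = mix(sharp, fallback, fallback_amount);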
The 4×4 filter (d <= 1.5)
Now, I’m heading into more interesting territory. While the good old smoothstep with fwidth is a well-known hack, it cannot deal with kernels larger than 2×2. Building on our success with replacing 4 filter taps with a single bilinear tap, we’re going to continue by implementing a 4×4 kernel with 4 bilinear taps. If we support a 4×4 filter kernel, we can keep our nice filter even for some slight minification as well. It’s going to require a lot of ALU, so we split the implementation into three cases: if d <= 0.5, use the single tap; if d <= 1.5, use this 4-tap method; otherwise, just sample normally. If we remove the middle case, we effectively have a “speed hack” mode for slower devices.
The filter coefficients for each element in the 4×4 grid will be
$$K_{ij} = F(u + 1 - i)\, F(v + 1 - j), \quad i, j \in \{0, 1, 2, 3\}$$
i.e. the outer product of the one-dimensional weights F(u + 1), F(u), F(u - 1), F(u - 2) with the corresponding weights for v. u and v are the phase variables in range [0, 1] as mentioned earlier. Since the filter is separable, we can compute X and Y separately and perform an outer product to complete the kernel.
To evaluate four F values, we only need to compute 5 sines (or Taylor approximations), not 8, because each F is a difference of two sines at half-texel offsets, so F(u) shares one sine with F(u + 1), and so on. For d = 1.5, the filter kernel for one dimension will look like
[Plot of the one-dimensional kernel weights F(u + 1), F(u), F(u - 1), F(u - 2) for d = 1.5.]
Since we compute X and Y separately, we end up with a cost of 10 Taylor approximations per pixel. Ouch, but GPUs crunch this like butter.
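Here’s a sketch of how the shared sine evaluations could work for one dimension (illustrative names; the real header vectorizes this differently):
// Compute the four 1D weights F(u + 1), F(u), F(u - 1), F(u - 2)
// from five shared sine evaluations.
void compute_weights(mediump float u, mediump float d, out mediump float w[4])
{
    const mediump float half_pi = 1.57079632679;
    mediump float s[5];
    for (int i = 0; i < 5; i++)
    {
        // Sine terms at half-texel offsets: (u + 1.5, u + 0.5, ...).
        mediump float t = clamp((u + 1.5 - float(i)) / d, -1.0, 1.0);
        s[i] = sin(half_pi * t);
    }
    // Adjacent F values share one sine term.
    for (int i = 0; i < 4; i++)
        w[i] = 0.5 * (s[i] - s[i + 1]);
}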
Now, each 2×2 block of this kernel can be replaced with one bilinear lookup and a weight.
// Given weights, compute a bilinear filter which implements the weight.
// All weights are known to be non-negative, and separable.
mediump vec3 compute_uv_phase_weight(mediump vec2 weights_u, mediump vec2 weights_v)
{
// The sum of a bilinear sample has combined weight of 1, we will need to adjust the resulting sample
// to match our actual weight sum.
mediump float w = dot(weights_u.xyxy, weights_v.xxyy);
mediump float x = weights_u.y / max(weights_u.x + weights_u.y, 0.001);
mediump float y = weights_v.y / max(weights_v.x + weights_v.y, 0.001);
return vec3(x, y, w);
}
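To tie it together, the four bilinear taps could be accumulated like this (a sketch under assumed names: weights_u and weights_v hold the four 1D weights, and base_pixel and inverse_texture_size are as in the 2×2 snippet):
// Accumulate four bilinear taps covering the 4x4 texel footprint.
// Block (i, j) covers texels at offsets {2i - 1, 2i} x {2j - 1, 2j}
// relative to base_pixel.
mediump vec4 color = vec4(0.0);
for (int j = 0; j < 2; j++)
{
    for (int i = 0; i < 2; i++)
    {
        mediump vec3 uvw = compute_uv_phase_weight(
            vec2(weights_u[2 * i], weights_u[2 * i + 1]),
            vec2(weights_v[2 * j], weights_v[2 * j + 1]));
        vec2 tap_uv = (base_pixel + 0.5 +
            vec2(float(2 * i - 1), float(2 * j - 1)) + uvw.xy) * inverse_texture_size;
        color += uvw.z * textureLod(tex, tap_uv, 0.0);
    }
}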
Is 4×4 worth it?
The difference between 4×4 and 2×2 is very subtle and hard to show with still images. It manifests itself around LOD -0.5 to 0.5, i.e. close to 1:1 sampling. 2×2 is sharper with more aliasing, while the 4×4 kernel remains a little blurrier but alias-free. For most use cases, I expect the 2×2-only method to be good enough, i.e. the good old “smoothstep/sine & fwidth”, but now I’ve come to that conclusion through math and not random graphics hackery.
Performance
A quick and dirty check on an RX 470 @ 1440p:
- Full screen with all pixels hitting 4×4 sampling case: 441.44 µs
- Full screen with all pixels hitting 2×2 sampling case: 207.68 µs
Anisotropy
We naturally get support for anisotropy because we compute different filters for the X and Y dimensions. However, once max(d.x, d.y) is too large, we must fall back to normal sampling: either a very blurry trilinear or actual anisotropic filtering in hardware.
Here’s a shot where we gradually go from crisp texels to normal, very blurry trilinear.
The green region is where d <= 0.5 (1 bilinear fetch), blue is where 0.5 < d <= 1.5 (4 taps), and the red region is the regular trilinear fallback. The blue region fades towards the normal trilinear fallback to avoid any abrupt artifacts.
Code
The complete shader implementation (reusable GLSL header) can be found at: https://github.com/Themaister/Granite/blob/master/assets/shaders/inc/bandlimited_pixel_filter.h
A test project to play around with the filter: https://github.com/Themaister/Granite/blob/master/tests/bandlimited_pixel_test.cpp
Go forth and filter your pixels correctly.