Graphics Enthusiast and Performance Devotee

Gunnar Sletta


Animated Blur for Static Images

Posted by gunnar@sletta.org on October 11, 2014 at 4:05 AM

I mentioned before that blurring is a pretty expensive operation, which got me thinking. For dynamic content we need to do the full thing, and what we have in Qt Graphical Effects is decent, but for static images there are techniques that are a lot faster.

We can load an image from disk, do a bit of processing, and then dynamically animate it from sharp to heavily blurred with close to the same rendering performance as blending a normal image, and with minimal memory overhead. Inspired by articles like this and this, I implemented something I think can be quite useful in Qt Quick. It is done using custom mipmaps combined with mipmap bias. A QML module implementing this feature can be found here.


Below are two screenshots from the test application. The blur ratio is adjusted to keep the focus either on the droids in the foreground (top) or on the walkers in the background (bottom).





Implementation Details 


Mipmapping is normally used to improve performance and reduce aliasing in downscaled textures. In our case, we want to use the mipmap levels to render blurred images instead. Look at the following example:




The original image is on the left. Towards the right are mipmap levels 1, 2 and 3 for this image, generated using glGenerateMipmap(). The bottom row shows the mipmap levels at their actual size, and the top row shows them scaled back up to the size of the original image. Using mipmap bias, an optional argument to the texture sampling function, we can pick a higher mipmap level. When combined with GL_LINEAR_MIPMAP_LINEAR, we can interpolate both in x and y and also between the mipmap levels, moving seamlessly from sharp to coarse. However, the default mipmaps give quite blocky results.
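To make that concrete, here is a minimal sketch of the GPU side of the trick, assuming a plain (non-atlased) OpenGL texture. The helper function and the blurAmount uniform are names I made up for this post, not the actual BlurredImage code; the point is simply that GL_LINEAR_MIPMAP_LINEAR plus the optional bias argument to texture2D() does all the work.

    #include <QOpenGLFunctions>

    // Trilinear filtering lets the bias blend smoothly between mipmap levels
    // instead of snapping to the nearest one.
    void setupBlurTexture(QOpenGLFunctions *gl, GLuint textureId)
    {
        gl->glBindTexture(GL_TEXTURE_2D, textureId);
        gl->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        gl->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }

    // Fragment shader (GLSL ES 1.00). 'blurAmount' is fed straight into the
    // optional bias argument of texture2D(); 0.0 samples the sharp base level,
    // higher values pick increasingly blurred levels.
    static const char *blurFragmentShader =
        "uniform sampler2D source;\n"
        "uniform lowp float blurAmount;\n"
        "varying highp vec2 texCoord;\n"
        "void main() {\n"
        "    gl_FragColor = texture2D(source, texCoord, blurAmount);\n"
        "}\n";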


The mipmap levels above were created automatically with glGenerateMipmap(), but it is also possible to specify each level manually. In the following image, I've created the mipmap levels by scaling the image down with linear sampling (essentially 2x2 box sampling) and then applying a 3x3 box blur. The process is repeated for each level, based on the previous one. It is not a true Gaussian blur, but the result comes out quite decent.
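A rough, unoptimized sketch of how such a chain can be built and uploaded with Qt is shown below. The helper names are again mine and not the BlurredImage source, and it assumes a GL_TEXTURE_2D texture is already bound.

    #include <QImage>
    #include <QColor>
    #include <QOpenGLFunctions>

    // 3x3 box blur: average each pixel with its eight neighbors, clamping at
    // the image edges. Slow but simple; good enough for a one-time upload.
    static QImage boxBlur3x3(const QImage &src)
    {
        QImage out(src.size(), src.format());
        for (int y = 0; y < src.height(); ++y) {
            for (int x = 0; x < src.width(); ++x) {
                int r = 0, g = 0, b = 0, a = 0;
                for (int dy = -1; dy <= 1; ++dy) {
                    for (int dx = -1; dx <= 1; ++dx) {
                        QRgb p = src.pixel(qBound(0, x + dx, src.width() - 1),
                                           qBound(0, y + dy, src.height() - 1));
                        r += qRed(p); g += qGreen(p); b += qBlue(p); a += qAlpha(p);
                    }
                }
                out.setPixel(x, y, qRgba(r / 9, g / 9, b / 9, a / 9));
            }
        }
        return out;
    }

    // Build the custom mipmap chain: every level is the previous one scaled
    // down 50% with linear filtering and then box blurred, uploaded as the
    // next mipmap level of the currently bound GL_TEXTURE_2D.
    void uploadBlurMipmaps(QOpenGLFunctions *gl, const QImage &image)
    {
        QImage level = image.convertToFormat(QImage::Format_RGBA8888);
        for (int lod = 0; ; ++lod) {
            gl->glTexImage2D(GL_TEXTURE_2D, lod, GL_RGBA, level.width(), level.height(),
                             0, GL_RGBA, GL_UNSIGNED_BYTE, level.constBits());
            if (level.width() == 1 && level.height() == 1)
                break;
            QSize half(qMax(1, level.width() / 2), qMax(1, level.height() / 2));
            level = boxBlur3x3(level.scaled(half, Qt::IgnoreAspectRatio,
                                            Qt::SmoothTransformation));
        }
    }

For real content with transparency one would normally blur premultiplied pixels to avoid dark fringes around transparent edges, but I have left that out to keep the sketch short.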


The result is that the mipmap levels are now blurred instead of blocky. Using these mipmap levels together with the mipmap bias, we can select an arbitrary blurriness with a single texture2D() call in the fragment shader. The resulting performance is equivalent to that of blending a normal image, which means this approach is fully doable on mobile GPUs. The mipmap levels add about 1/3 to the memory consumption (each level is a quarter the size of the previous one, so the extra levels sum to 1/4 + 1/16 + ... ≈ 1/3), which is quite acceptable.


Please note that because of the manual downscaling and box blur, the initial texture upload is significantly slower than a plain upload, so it needs to be done in controlled places, like a loading screen or during application startup, to avoid hiccups in the animation. For a very controlled setup, one could imagine generating all the mipmap levels offline and simply uploading them in the application. That would save a bit of time, at the cost of some flexibility. Note also that mipmapping does not mix with a texture atlas, so each image needs to be a separate texture, which means the blurred images do not batch in the scene graph renderer. So while we can have several large images being blurred, we cannot have hundreds or thousands of small ones in a scene and expect to sustain a velvet 60 fps.
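Purely as an illustration of that offline idea (the file naming is made up, and it reuses the boxBlur3x3() helper from the sketch above), the levels could be precomputed by a small tool and written to disk, so the application only has to load and upload them:

    #include <QImage>
    #include <QString>

    // Hypothetical offline step: precompute the blurred chain once and save
    // each level as a separate image. At startup the application loads these
    // files and hands them to glTexImage2D(level, ...) without any CPU-side blur.
    void saveBlurMipmapChain(const QImage &image, const QString &baseName)
    {
        QImage level = image.convertToFormat(QImage::Format_RGBA8888);
        for (int lod = 0; ; ++lod) {
            level.save(QString("%1_lod%2.png").arg(baseName).arg(lod));
            if (level.width() == 1 && level.height() == 1)
                break;
            QSize half(qMax(1, level.width() / 2), qMax(1, level.height() / 2));
            level = boxBlur3x3(level.scaled(half, Qt::IgnoreAspectRatio,
                                            Qt::SmoothTransformation));
        }
    }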



So, if you want to incorporate animated blurs of static images into your application or game, have a look at BlurredImage.


Categories: Qt, Graphics, Performance