Quake fluids explained


In June 1996, Quake shook the FPS genre once again. It was one of the first games to use textured 3D models for pretty much everything (not the very first though: that honour goes to Bethesda's Terminator: Future Shock, which came out in late 1995). It featured a series of labyrinth-style levels full of enemies, traps, and an interactive environment, with awesome graphics and atmosphere in general.
Thanks to the use of BSP, a technique already used in Doom, it was possible to create huge levels with little impact on performance, and Quake made extensive use of this, as well as of prerendered static lighting.

The first version of Quake was, of course, for MS-DOS. It used a highly optimized software renderer written in C and x86 assembly (3D accelerator cards weren't really a thing until the 3dfx Voodoo came out a few months later), and it supported resolutions ranging from 320x200 up to 1280x1024. Most people had to play at 320x240, and if you were really rich you could maybe play at 640x480, which in my opinion is the best way to play Quake, even today.

With the arrival of the first consumer GPUs, GLQuake also came along. It ran a lot better than the original, of course, since it was hardware accelerated, but it looked terrible in comparison: many features of the software renderer were missing, most notably much of the lighting, which ruined the atmosphere.
After Quake was open sourced, this terrible mess was cleaned up, and QuakeSpasm and other really good ports look pretty much identical to the original. One feature, however, is still missing, and it's the one I'll talk about in this article: fluids.
The original Quake used an interesting approach to warp textures as they were drawn onto a surface flagged as fluid. This effect is not easily replicated with modern shaders, so what most modern source ports do is just tessellate the water surface and move the vertices around using the original algorithm. It works, but it doesn't look as good as the original, especially up close.

As far as I know, the only modern source port that still supports the original software renderer is Mark V WinQuake, and it's the one I recommend if you want to play Quake on a modern machine.

So how does this warp effect work? Let's take a look at this capture from the original version of Quake:

The fluid surface

In the original Quake, a fluid surface was simply flat, with a texture applied on it, and marked as fluid so that the engine would animate it. This makes things a lot simpler because we can treat this as a regular surface with a 2D texture.

So, we have our fluid surface and an observer looking at it. We can trace a ray from the observer and onto the surface and see where they intersect.
We project this intersection point onto the surface and call the coordinates relative to the surface x and y, starting from 0,0 in the upper left corner of the surface (as it's common in computer graphics).

This image sums up what I just said:
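The projection can also be sketched in code. This is just an illustration, not part of Quake's actual code: assuming the fluid surface lies on the plane z=0, a ray from observer position o along direction d hits it where o.z + t*d.z = 0 (surfaceHit is a hypothetical helper name):

```javascript
// Intersect a ray (origin o, direction d) with the plane z=0 and
// return the [x, y] coordinates of the hit point on the surface.
function surfaceHit(o, d) {
    if (d.z === 0) return null;            // ray parallel to the surface: no hit
    var t = -o.z / d.z;                    // distance along the ray to the plane
    if (t < 0) return null;                // surface is behind the observer
    return [o.x + t * d.x, o.y + t * d.y]; // x, y relative to the surface
}
```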

Mapping the texture

This is the texture that we're going to map on to the surface:

Let's call this tex.
Quake assumed that these textures were 64x64 for simplicity, but we will consider any size texW x texH.
Coordinates in this texture also start from the upper left corner, as usual.

If we have the x and y of the point mentioned before and a scale for the texture, mapping the texture is very simple:

mappedX=~~(x*scale)%texW;
mappedY=~~(y*scale)%texH;
if(mappedX<0) mappedX+=texW;
if(mappedY<0) mappedY+=texH;

Now mappedX and mappedY tell us which pixel inside tex is the one that we want to display.
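Put together as a self-contained function (mapCoord is my name for it, not Quake's), the mapping step looks like this; it works for both axes, passing texW for x and texH for y:

```javascript
// Map a surface coordinate to a texel index, wrapping around the texture.
function mapCoord(v, scale, texSize) {
    var m = ~~(v * scale) % texSize; // ~~ truncates to an integer, % wraps around
    if (m < 0) m += texSize;         // JavaScript's % can return negative values
    return m;
}
```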

JavaScript stores image data in byte arrays of size 4*width*height, laid out as RGBARGBARGBA... starting from the upper left corner and going left to right, top to bottom, so we need to take that into account when copying the pixel.
Let's call the array representing our output surface out (it has size outW x outH), and keep calling tex the texture array mentioned before.

p=4*(y*outW+x); //index of the pixel to write to
tp=4*(mappedY*texW+mappedX); //index of the pixel to read from
for(i=0;i<4;i++) out[p+i]=tex[tp+i]; //copy the R, G, B, A bytes

At this point, we have our lava surface, but no animation yet.
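As a self-contained sketch of that copy (copyPixel is an illustrative name; the real code inlines this in a loop over every output pixel):

```javascript
// Copy the RGBA pixel at (mappedX, mappedY) in tex (texW pixels wide)
// to (x, y) in out (outW pixels wide). Both are flat RGBA byte arrays.
function copyPixel(out, outW, x, y, tex, texW, mappedX, mappedY) {
    var p  = 4 * (y * outW + x);             // index of the pixel to write to
    var tp = 4 * (mappedY * texW + mappedX); // index of the pixel to read from
    for (var i = 0; i < 4; i++) out[p + i] = tex[tp + i];
}
```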

Warp effect

The warp is done by adding a 2D function to the mapping that we just did (the ~~ operator is a double bitwise NOT, a fast way to truncate a number to an integer in JavaScript).
Instead of pointing to
~~(x*scale) and ~~(y*scale)
we'll point to
~~((x+something)*scale) and ~~((y+something)*scale)

By looking at this animation, we can see what the "something" function depends on:

  • x, divided by a variable closeness to define how close/far we are from the surface
  • y, divided by a variable closeness to define how close/far we are from the surface
  • a timestamp t, multiplied by a variable speed to define how fast the animation will be

These values are used as input for a sine function, whose output is also multiplied by a variable intensity to make the animation more or less intense, which is then added to x and y before mappedX and mappedY are calculated.
The code should make this more clear:

mappedX=~~((x+intensity*Math.sin(y/closeness+t*speed))*scale)%texW;
mappedY=~~((y+intensity*Math.sin(x/closeness+t*speed))*scale)%texH;
if(mappedX<0) mappedX+=texW;
if(mappedY<0) mappedY+=texH;

Of course, finding values for scale, closeness, speed, and intensity that give the exact same effect seen in Quake is a bit tricky.

Notice that inside the Math.sin function, we have swapped x and y. This makes the phases of the two offsets different and creates the warp effect that we want, instead of a simple "breathing" effect. Here's a comparison showing the difference.

The left one is wrong, the right one is correct.
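The whole warped lookup can be condensed into one helper (warpCoord is my naming, not Quake's); the swap is just a matter of which coordinate feeds the sine:

```javascript
// Warped texture coordinate: v is the axis being mapped, other is the
// perpendicular coordinate that drives the sine (the swap described above).
function warpCoord(v, other, t, scale, speed, intensity, closeness, texSize) {
    var m = ~~((v + intensity * Math.sin(other / closeness + t * speed)) * scale) % texSize;
    if (m < 0) m += texSize;
    return m;
}
// usage: mappedX = warpCoord(x, y, t, ...); mappedY = warpCoord(y, x, t, ...);
```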


Let's implement this algorithm in JavaScript, and draw it on a 2d Canvas element.

With JavaScript being JavaScript, we need to optimize the algorithm as much as possible. Here's my implementation:

var sinLUT=[];
for(var i=0;i<2*Math.PI;i+=0.01) sinLUT[sinLUT.length]=Math.sin(i)*16;
function sine(i){
    return sinLUT[(~~(i>=0?i:-i)%sinLUT.length)];
}
function quakeFluid(texture,canvas,scale,resScale,speed,intensity,closeness){
    if(!scale||scale<=0) scale=1;
    if(!resScale||resScale<0.1) resScale=1;
    if(!speed) speed=1;
    if(!intensity||intensity>1.5||intensity<-1.5) intensity=1;
    if(!closeness||closeness<=0) closeness=1;
    canvas.style.imageRendering="pixelated"; //disable filtering so the browser doesn't smooth our pixels
    canvas.isVisible=function(){
        var r=canvas.getBoundingClientRect();
        return r.top+r.height>=0&&r.left+r.width>=0&&r.bottom-r.height<=(window.innerHeight||document.documentElement.clientHeight)&&r.right-r.width<=(window.innerWidth||document.documentElement.clientWidth);
    };
    canvas.qfSetResScale=function(r){
        if(!r||r<0.1) r=1;
        resScale=r;
    };
    canvas.qfGetResScale=function(){ return resScale; };
    canvas.qfSetTexture=function(url){
        var tex=new Image();
        tex.onload=function(){
            var qfTex=document.createElement("canvas");
            qfTex.width=tex.width; qfTex.height=tex.height;
            qfTex.getContext("2d").drawImage(tex,0,0);
            var data=qfTex.getContext("2d").getImageData(0,0,tex.width,tex.height).data;
            var qfTexCopy=[]; //copy to a plain array, which is faster to index
            for(var i=0;i<data.length;i++) qfTexCopy[i]=data[i];
            canvas.qfTex={width:tex.width,height:tex.height,data:qfTexCopy};
        };
        tex.src=url;
    };
    canvas.qfScale=scale; canvas.qfSpeed=speed; canvas.qfIntensity=intensity; canvas.qfCloseness=closeness;
    canvas.qfSetTexture(texture);
    var draw=function(){
        if(canvas.qfTex==null||canvas.qfFrameBuffer==null||!canvas.isVisible()) return;
        var ctx=canvas.getContext("2d");
        var out=canvas.qfFrameBuffer.data;
        var tex=canvas.qfTex.data, texW=canvas.qfTex.width, texH=canvas.qfTex.height;
        var t=~~(new Date().getTime()*canvas.qfSpeed);
        var compScale=canvas.qfCloseness*resScale*2;
        var xOff,xM,yM,p=0,tp;
        for(var y=0;y<canvas.height;y++){
            xOff=canvas.qfIntensity*sine(y/compScale+t); //x offset depends on y, and vice versa
            for(var x=0;x<canvas.width;x++){
                xM=~~((x+xOff)*canvas.qfScale)%texW; if(xM<0) xM+=texW;
                yM=~~((y+canvas.qfIntensity*sine(x/compScale+t))*canvas.qfScale)%texH; if(yM<0) yM+=texH;
                tp=4*(yM*texW+xM);
                out[p++]=tex[tp]; out[p++]=tex[tp+1]; out[p++]=tex[tp+2]; out[p++]=255;
            }
        }
        ctx.putImageData(canvas.qfFrameBuffer,0,0);
    };
    var raf=function(){
        var newW=~~(canvas.clientWidth*resScale), newH=~~(canvas.clientHeight*resScale);
        if(canvas.width!==newW||canvas.height!==newH||canvas.qfFrameBuffer==null){
            canvas.width=newW; canvas.height=newH;
            canvas.qfFrameBuffer=canvas.getContext("2d").createImageData(newW,newH);
        }
        draw();
        requestAnimationFrame(raf);
    };
    requestAnimationFrame(raf);
}

If you're reading this code, there are a few things you should know:

  • The sine function does not take an input between 0 and 2π, but between 0 and 629, which saves us from performing some divisions. sinLUT is also pre-multiplied by 16.
  • There is no interpolation anywhere. In fact, the code even disables texture filtering if possible to make sure the browser doesn't try to smooth the pixels when drawing the canvas. Because that's the way it should be.
  • All constants are tweaked so that the default settings look right.
  • Despite my best efforts, JavaScript is slower than super optimized x86 assembly written by John Carmack, a lot slower...
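To see the lookup-table trick in isolation: one full period sampled every 0.01 radians gives 629 entries, so an integer index i corresponds to the angle i/100, and the result comes back pre-scaled by 16. A quick sanity check that it behaves like 16*Math.sin:

```javascript
// The lookup table as used above: one period of sin, sampled every 0.01 rad,
// pre-multiplied by 16 so the caller skips a multiplication too.
var sinLUT = [];
for (var i = 0; i < 2 * Math.PI; i += 0.01) sinLUT[sinLUT.length] = Math.sin(i) * 16;
function sine(i) {
    // i is in "LUT units" (100 per radian); abs + modulo replaces range reduction
    return sinLUT[(~~(i >= 0 ? i : -i) % sinLUT.length)];
}
```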

Now that the code is out of the way, let's draw this on a Canvas.

<!DOCTYPE html>
<html>
<head>
    <script type="text/javascript" src="lava.js"></script>
</head>
<body>
    <canvas id="demo" class="block"></canvas>
    <script type="text/javascript">
        quakeFluid("lava.png",document.getElementById("demo"),0.6,0.5,1,1,1); //start with "lava.png", scale=0.6, resScale=0.5, speed=1, intensity=1, closeness=1
    </script>
</body>
</html>

Parameters of my implementation

These are the parameters for the quakeFluid function:

  • url: URL of the texture for the surface (will be loaded and used when ready)
  • canvas: The target canvas
  • scale: scale of the texture. Lower=larger
  • resScale: a parameter that I added, to set the rendering resolution scale. Lower=faster and more pixelated. I like to keep this at 0.5 for extra pixely goodness
  • speed: animation speed. Can even go negative
  • intensity: animation intensity. Anything above 1.5 looks shitty
  • closeness: how close the observer is to the surface. Lower=further

The parameters are not constant: they are stored in the canvas element, and you can change them at runtime through:

  • qfSetTexture(url) method
  • qfScale variable
  • qfSetResScale(r) and qfGetResScale() methods
  • qfSpeed variable
  • qfIntensity variable
  • qfCloseness variable


Here's a video showing the effect with various textures and parameters.


You are free to copy, modify and use this code as you wish.
