Plasmation Update 5 – export and fluid troubles

This week was good and bad.

The good news was that, after a very quick research session, I was able to easily figure out how to export a PNG from the data in a canvas. Using three.js this was pretty straightforward (after a little head-scratching over how to do, in three.js, something that was explained online).

First, you need to set a flag (preserveDrawingBuffer) on the renderer being added to the canvas, which tells it to keep the back-buffer’s data. Otherwise the buffer gets cleared, and reading the data back from the canvas returns an array of zeros.

renderer = new THREE.WebGLRenderer({preserveDrawingBuffer: true, alpha: true});
renderer.autoClear = false; // a property rather than a constructor option

The other two settings: alpha tells the canvas that we will be able to see underneath it (the checkerboard effect), and autoClear = false (which has to be set as a property on the renderer, since it isn’t a constructor option) stops it being cleared automatically. We need to preserve what’s been rendered because the shaders use this data.

The next thing was simple: using a JavaScript method called “toDataURL”, which grabs the pixel data from a canvas DOM element and returns it as a data URL. I turned this into a function that can be called on the “Plasmation” class (the class that holds all the system’s functionality, like updating, initializing and rendering the canvas).

/**
 * returns the canvas contents as a base64-encoded PNG data URL
 * @return string
 */
this.BufferData = function()
{
    return renderer.domElement.toDataURL("image/png");
};

I can then take the data returned from this and simply insert it into the src attribute of an image element. So, instead of an image path on the server, the attribute contains a base64-encoded version of the image data.

<img id="image_output0" src="data:image/png;base64,iVBORw0KGgoAAAANSUhED....." width="100" height="100">

(I’ve put “…” in there because that data would otherwise be about 112,000 characters long…).
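Wiring this up from the editor is a one-liner (a sketch, assuming a “plasmation” instance of the Plasmation class, and using jQuery, which the project already loads):

// hypothetical wiring: push the rendered frame into the image element above
$("#image_output0").attr("src", plasmation.BufferData());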

The reason for using the base64 data method is that rendering on the canvas is a client-side job, so there is no image URL on the server for me to use. To get one, I would have to send the data to the server, convert it back into image data using PHP, save it as a file, and then have the JavaScript ping the server again for the new image’s URL to insert into the src attribute. That seemed a bit ridiculous, so I simply made use of the data URL trick that sites like Google use for their search thumbnails (I’m sure you’ve copied the source of an image from a Google image search and been given a monster URL like the one in the code block above). One catch: my original project spec calls for thumbnails of each simulation, so I may have to do the server round trip anyway. For now, though, the data URL approach is faster and has less chance of error.
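For the record, if the thumbnail requirement does force the server round trip, the client half would look roughly like this (just a sketch; the /save-thumbnail endpoint and the response shape are hypothetical):

// hypothetical endpoint: the server would base64-decode the payload with PHP,
// write a real PNG to disk, and answer with the new image's URL
$.post("/save-thumbnail", { image: plasmation.BufferData() }, function(response)
{
    $("#image_output0").attr("src", response.url);
}, "json");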

Shader loader

Looking at three.js tutorials, there seems to be a trend of people either putting shaders into an HTML element…

<!-- a noob way of creating vertex shader code -->
<script id="vertexShader" type="x-shader/x-vertex">
    void main() {
        gl_Position = vec4( position, 1.0 );
    }
</script>

… or using an array of strings for each line of the shader…

fragmentShader: [
    "uniform sampler2D tDiffuse;",
    "uniform float amount;",
    "varying vec2 vUv;",
    "void main() {",
    "    vec4 color = texture2D(tDiffuse, vUv);",
    "    gl_FragColor = color * amount;",
    "}"
].join("\n")

I looked at this and shook my head: neither approach gets syntax highlighting, and both are hard to maintain. I knew there must be a more usable way, like in C++, where you can just create “shader.glsl” and load the file.

It turns out that loading a file is kind of hard when you don’t have a web server, which is why people opt for those methods (they’re safer for beginners)… but I have a web server. So I created a shader loader function that uses a jQuery AJAX call:

/**
 * custom function that loads a shader
 * uses a synchronous jQuery AJAX call to read a shader file from the web server
 * (synchronous on purpose: the source must exist before the material is created)
 * @param string url path to the shader file on the server
 * @return string the full shader source
 */
this.LoadShader = function(url)
{
    return $.ajax({
        url: url,
        async: false
    }).responseText;
};

Then, to use the shader with a three.js material, I just need to create it like this:

var kernel = this.LoadShader("/shaders/kernel-vert.glsl");
var advect = this.LoadShader("/shaders/advect-frag.glsl");

// create advect calculations shader
// normPixelWidth, normPixelHeight, pixelUnit, pixelRatio;
advectVelocityShader = new THREE.ShaderMaterial({
    uniforms: {
        pixelUnit: { type: "v2", value: pixelUnit },
        pixelRatio: { type: "v2", value: pixelRatio },
        scale: { type: "f", value: 1.0 },
        sourceSample: { type: "t", value: zeroTexture }, // texture 1 in, texture 1 out
        velocitySample: { type: "t", value: zeroTexture }, // texture 1 is used later and loops back to here
        deltaTime: { type: "f", value: options.fluid.step }
    },
    vertexShader: kernel,
    fragmentShader: advect
});

So basically, this allows me to use a single file for each shader, much like Jonas Wagner does with his fluid simulation (more on him below).

You’ll notice that the shader sources are stored in variables first. I could simply stick the LoadShader call straight into the “vertexShader” property, but I needed to reuse some of the sources in other three.js materials, so it was neater to load them all beforehand.
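For example, the kernel vertex shader gets reused like this (a sketch: the fragment shader path is an assumption, though the uniforms match the ones set in the render code further down):

// reusing the already-loaded kernel vertex shader in another pass's material
var divergence = this.LoadShader("/shaders/divergence-frag.glsl"); // assumed path
divergenceShader = new THREE.ShaderMaterial({
    uniforms: {
        velocitySample: { type: "t", value: zeroTexture },
        deltaTime: { type: "f", value: options.fluid.step }
    },
    vertexShader: kernel, // same source as the advect material, loaded once
    fragmentShader: divergence
});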

The bad news

My other task this week was getting the fluid simulation working. This, of course, was extremely difficult.

After researching examples online, I managed to find a WebGL fluid simulation by Jonas Wagner that uses shaders for the velocity, force, advection, Jacobi and other calculations, and runs in real time (http://29a.ch/2012/12/16/webgl-fluid-simulation). I contacted Jonas and he gave me permission to use his shaders. I’ll get into what actually changed in a later post, because I had to change a lot; the project has to be 70% my own work.

After about two full days of programming the shaders on little sleep, I managed to get three.js set up with a proper camera, render a scene for each shader pass of the fluid simulation, and figure out how to render to texture.

The code for this implementation is too large to show here, so I’ll link the javascript file for the entire thing and you can check that out: http://plasmation.jamesheazlewood.com/js/editor.js

It basically boils down to the following set of steps:

  1. Create the renderer context on the canvas (described above)
  2. Create an orthographic camera
  3. Load the textures to be used, e.g. the “heat” texture and the initial velocity
  4. Load the shader files with my AJAX shader loader (mentioned above)
  5. Create all the needed render target textures: “velocityTexture1”, “velocityTexture2”, “divergenceTexture”, etc. (see the sketch after this list)
  6. Create a scene for each shader pass (6 total)
  7. Create a mesh for each shader pass (6 total)
  8. Apply the shaders to the meshes and add them to the scenes for rendering
  9. Position the meshes so that they sit in the exact centre of the camera and fill the camera frustum
  10. Call the render function
  11. In the render function, render each scene one by one, making sure each outputs to the correct texture
  12. Where needed, send that texture to the next shader for the next step
  13. Repeat the render function 60 times a second
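As a rough sketch, the setup for one pass (steps 2 and 5 to 9) comes out something like this in the 2014-era three.js API (sizes and names are illustrative):

// step 5: a floating-point render target for a pass to write into
// (FloatType is an assumption here; it needs the OES_texture_float extension)
var velocityTexture1 = new THREE.WebGLRenderTarget(512, 512, {
    minFilter: THREE.NearestFilter,
    magFilter: THREE.NearestFilter,
    format: THREE.RGBAFormat,
    type: THREE.FloatType
});

// step 2: an orthographic camera whose frustum is exactly one unit across
var camera = new THREE.OrthographicCamera(-0.5, 0.5, 0.5, -0.5, 0, 1);

// steps 6 to 9: one scene per pass, each holding a quad that fills the camera
var velocityScene1 = new THREE.Scene();
velocityScene1.add(new THREE.Mesh(new THREE.PlaneGeometry(1, 1), advectVelocityShader));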

The texture I used for the initial velocity of the fluid was something like this:

[Image: the initial velocity texture]

I used the red and green channels as the velocity of the particles. This is just a test for now; in reality a texture like this can’t represent negative velocity, because each channel only stores a value between 0 and 1, where 1 is the brightest value (255) of the red or green channel. For the final version, I’ll have to make the neutral colour dark yellow (127 red, 127 green), with anything above that representing positive velocity and anything below it negative.
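In JavaScript terms, the planned encoding would be something like these hypothetical helpers (not project code yet):

// map a signed velocity in -1..1 onto a 0..255 channel value:
// -1 -> 0, 0 -> ~127 (the dark yellow neutral), +1 -> 255
function encodeVelocityChannel(v)
{
    return Math.round((v * 0.5 + 0.5) * 255);
}

// the matching decode, as a shader would do it after sampling (0..1 back to -1..1)
function decodeVelocityChannel(c)
{
    return (c / 255) * 2 - 1;
}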

Once all this was set up, the next step was to make sure it all worked. It didn’t. And because I didn’t fully know what three.js does behind the scenes, or which WebGL flags it sets (three.js makes normal 3D scenes easier, but that’s not even close to what I’m trying to do), working out why was painful.

The render code iterates over each frame, and looks like this:

// 1
advectVelocityShader.uniforms.sourceSample.value = velocityTexture1;
advectVelocityShader.uniforms.velocitySample.value = velocityTexture1;
advectVelocityShader.uniforms.deltaTime.value = 0.01666;
renderer.render(velocityScene1, camera, velocityTexture2, false);

// 2
addForceShader.uniforms.velocitySample.value = velocityTexture2;
renderer.render(velocityScene2, camera, velocityTexture2, false);

// 3
divergenceShader.uniforms.velocitySample.value = velocityTexture2;
divergenceShader.uniforms.deltaTime.value = 0.01666;
renderer.render(divergenceScene, camera, divergenceTexture, false);

// 4
jacobiShader.uniforms.divergenceSample.value = divergenceTexture;

// set up vars for jacobi shader loop
var originalPressure0 = pressureTexture1,
    originalPressure1 = pressureTexture2,
    pressureSwap = originalPressure0;

// this iterates over the jacobi shader (renders into one texture, then swaps it with
// the other so the new data can be written while the previous data is read)
for(var i = 0; i < options.fluid.iterations; i++)
{
    // update shader this iteration
    jacobiShader.uniforms.pressureSample.value = originalPressure0;
    renderer.render(pressureScene1, camera, originalPressure1, false);

    // swap textures around
    pressureSwap = originalPressure0;
    originalPressure0 = originalPressure1;
    originalPressure1 = pressureSwap;
}

// 5
pressureShader.uniforms.pressureSample.value = pressureTexture1;
pressureShader.uniforms.velocitySample.value = velocityTexture2;
renderer.render(pressureScene2, camera, velocityTexture1, false);

// 0 render visualiser
particleShader.uniforms.deltaTime.value = 0.01666;
particleShader.uniforms.pressureSample.value = pressureTexture1;
particleShader.uniforms.velocitySample.value = velocityTexture1;

The false at the end of each render call is the forceClear flag; I didn’t want the target cleared, because the previous values on each texture are needed for the next iteration.

It took a lot of effort to go over the shader tutorials and convert them into three.js functionality, but then it was finally complete. And the result was:

[Image: the result of the first attempt]

I finally figured this out over the space of about two days: it was something to do with my shader uniforms colliding with ones that already existed, so my values were being bound to the “fog” uniform even though I wasn’t binding anything to do with fog. I assume it was a uniform-location issue in WebGL rather than in the JavaScript.

But once I fixed that, the simulation wasn’t even updating over time; it just sat there displaying the first frame.

[Image: the simulation frozen on its first frame]

Eventually I added a debugger that displayed certain textures, and I noticed that the second iteration of the velocity was actually working, but in the next render it was being reset.
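The debugger was essentially a pass-through draw of whichever render target I wanted to inspect, something like this sketch (names are illustrative; in the old three.js API a render target could be used directly as a texture, just like in the render code above):

// point a basic textured quad at a render target and draw it to the canvas
var debugMaterial = new THREE.MeshBasicMaterial({ map: velocityTexture2 });
var debugScene = new THREE.Scene();
debugScene.add(new THREE.Mesh(new THREE.PlaneGeometry(1, 1), debugMaterial));

function debugTexture(target)
{
    debugMaterial.map = target;          // the texture to inspect this frame
    renderer.render(debugScene, camera); // no render target given, so it draws to screen
}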

[Image: the second velocity iteration. Something happened, but this is as far as it goes.]

I suspected that the textures being rendered to could not contain negative values, which would mean the pressure was never being calculated correctly. I also noticed that the Jacobi loop was not doing anything to the textures. So it was either the textures being reset every frame, or the scenes losing their information once another scene was rendered. It may be that the scenes render to what is effectively a single texture in memory, or that the textures cannot be modified after being initially set. Either way, I had no control over these internals, so I had to find a different way of approaching the render cycle.

The amount of frustration involved was incredible, as I didn’t know where to even start looking for a solution. There were no similar fluid simulations online that used three.js, which is what I needed to use, and I didn’t know many people who could help.

 

I’ll get into how I overcame this in the next post.  
