While I’m here, I should get into how I managed to get the exporter working.
My first attempt was to change the renderer so it would render to a set of textures stored in an array. This worked well and allowed me to use a jQuery slider to scrub through the exported frames. However, because all of these images were stored in WebGL buffers, it was hard to get the data out of them. Another issue was that it used an enormous amount of GPU memory; the WebGL context would crash when exporting more than 100 or so images (I guess storing over 100 512×512 textures is a bad idea…)
My next attempt was to simply use the canvas to render. The way I did this was to render the animation to the canvas the way I had been before, except with things like frame skips implemented, plus a user option for whether or not the output has opacity.
This was fairly easy, apart from multiple bugs with jQuery (I ended up just using the context straight from JavaScript).
The frame constructor worked by rendering a frame to the main canvas, then drawing that frame onto a separate 2D canvas at the proper location, on top of what was already there. This was easy, as I could choose the scaling of the image as well (although the scaling kind of sucks, but I want to wait and see how it looks once I add the lens blur effect to prevent that noisy velocity thing).
Below is the code for creating the sprite-sheet client-side:
plas.Reset();
plas.RefreshShaders();
plas.options.viewMode = 1;
plas.Render();
plas.ticks = 0;

// the 2D canvas the sprite-sheet gets composited onto
var image;
var canvas = document.getElementById('image_compile');
var context = canvas.getContext('2d');

// size of one frame in the sheet, and the sheet's dimensions
var scale = plas.options.scale;
var size = (512 * scale);
var frameCount = plas.options.frames;
var h = Math.round(frameCount / plas.options.horFrames);
canvas.width = size * plas.options.horFrames;
canvas.height = size * h;
context.globalAlpha = 1.0;

var rFrame = 0;
var x, y;
while(rFrame < frameCount)
{
    if(plas.Render())
    {
        // work out this frame's cell in the sprite-sheet grid
        x = rFrame % plas.options.horFrames;
        y = Math.floor((rFrame - x) / plas.options.horFrames);

        // copy the WebGL canvas into the 2D canvas via a data URL
        image = new Image();
        image.src = plas.BufferData();
        context.drawImage(image, x * size, y * size, size, size);
        rFrame ++;
    }
}

$("#image_output").attr("src", canvas.toDataURL("image/png"));
Once the frames are rendered to the new canvas, it gets all the data from this canvas, converts it to a data URL (the same base64-encoded format I was already using) and puts that into an image ready for exporting. The user can then save the image to their hard drive.
I noticed that you can save a canvas directly as an image in Firefox and Chrome, which meant I didn’t even need to create an image from the canvas data. However, although Internet Explorer 11 ran the simulation really well, it couldn’t save canvas data as an image, so I had to convert it to an image anyway.
The next step was going through all the settable options in the scene and giving them controls. I used sliders because I find they need fewer clicks and less typing than just moving a value back and forth by hand. I added titles to these sliders and then got it to also draw their values underneath.
I managed to get so stuck with the fluid simulation that I searched online for GPGPU examples, and found this. It was a really well-written example of how to use the RGB channels of a “velocity” texture as the XYZ co-ordinates of objects on the screen, and have it actually work in three.js.
So I had to painstakingly go through each line of code and figure out a way to work it into my own fluid simulation program, without having to delete too much of my existing code.
The major differences I found between this method and my original one were, firstly, that the same mesh was used to render the texture for every shader pass, and secondly, that instead of adding a mesh to a separate scene per render pass, it created one single scene for rendering the fluid onto that one mesh and simply SWAPPED the texture used by the mesh between the pre-defined textures.
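As a rough sketch of that structure (these are my own names, not the example’s actual code, and I’m assuming each pass material exposes an input texture uniform; in reality the uniform names differ per pass):

// one scene, one camera and one full-screen quad shared by every shader pass
var quadScene = new THREE.Scene();
var quadCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, -1, 1);
var quad = new THREE.Mesh(new THREE.PlaneGeometry(2, 2), new THREE.MeshBasicMaterial());
quadScene.add(quad);

// a generic "pass": swap the quad's material and its input texture, then render to a target
function runPass(material, inputTexture, outputTarget)
{
    quad.material = material;
    material.uniforms.velocitySample.value = inputTexture;
    renderer.render(quadScene, quadCamera, outputTarget, false);
}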
After a few bug fixes and frustration the result was:
Looks better than a single frame! Although there’s no accumulation of textures again, as it is simply replacing the velocity from the previous iteration with the initial velocity, and it isn’t using the divergence shader as it should be.
From further investigation (and another entire day of banging my head against the lack of debugging ability in such a complex system), I managed to figure out that each texture was being over-written after it entered the next shader. This meant that once the velocity texture from the first iteration entered the “add velocity” shader, it was being over-written with the new velocity, and the wrong values were then sent to the divergence shader. To fix this, I tried adding a new texture to the system to use for adding velocity onto the velocity texture from the first pass…
Success!
Here you can see the velocity X (red) the velocity Y (green) and the pressure (blue).
I then created a texture that I could use to sample the colour based on the velocity of the fluid:
I then rendered these colours based on the pressure. After some trial-and-error, and after fixing a weird bug where the colours would go completely white past the usual 1.0 value of the texture (caused by the texture lookup looping back to the start of the image), the results were interesting:
I sent it to someone and they said that it reminded them of a pool of lava. (A pity I didn’t record a video of it because it looked really smooth and mesmerizing). But I wanted it to look like an explosion.
After more tweaking, I got the fluid to fade out to 0 when there was little-to-no pressure:
I then did some polishing and included more colours in the “heat” texture to get more fun colours of the flames:
There were still a few problems (not counting an issue with the velocity texture I originally had, where I was trying to figure out why subtracting 0.5 from it wouldn’t shift the velocity by half, but I won’t get into that):
The render can reach the edge of the screen and get cut off, which is an undesired result for a sprite.
The edge of the screen was prone to looping back and causing the fluid to exponentially fling out of the edge.
The effect lasted too long for a sprite-sheet (about 100 frames!).
Velocity that is too highly concentrated in one area creates a noisy patch when it should look completely white (caused by the shaders only being able to sample the pixels either side of one another, without regard to the rest of the effect).
The first problem I solved by introducing a texture I could use to multiply the blending level of the heat texture:
With this being sampled per pixel, I was able to adjust the flames so that if they trail too far from the centre, the colour being sampled from the heat texture is gradually pushed down towards the right-hand side of the gradient, causing the effect to naturally burn out the further it gets from the centre:
To prevent chaotic bleeding of velocity, I added another sample texture, this time to the “add velocity” shader. It basically has a region around the outside of the image that the velocity can be multiplied by, zeroing out the velocity around the edges:
The “add velocity” shader now looks like this (notice the edges are darker):
This method was sort of like a cheap hack, but because I’m not too worried about the frames-per-second, and the fact that I have the circle texture limiting the drawing of the fluid anyway, it works well for the situation.
For the effect lasting too long, I introduced a “time” variable that starts at 1.0 and decreases by a small amount each frame. Once it reaches 0.0, the effect has faded out completely. I multiply this number into the effect, much like the circle texture from the first issue above.
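On the JavaScript side this is just a counter decremented once per frame and pushed into the shader’s life uniform. A minimal sketch (particleShader here stands in for whichever material owns the uniform, and the step value is made up):

// fade the whole effect out over roughly 100 frames
var life = 1.0;
var lifeStep = 0.01; // amount removed each frame (made-up value, tweak to taste)

function UpdateLife()
{
    life = Math.max(life - lifeStep, 0.0);
    // passed into the "life" uniform used by the final render shader shown further below
    particleShader.uniforms.life.value = life;
}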
I’ll have to figure out the other issues at another time as the project has already lasted too long and the effect is starting to look really good. Here’s a video:
One idea for the remaining noise problem is to apply a lens blur effect:
This removes some of the fine details of the more orange flames, though, so I’d have to try and apply it only to the brighter areas somehow.
I also managed to accidentally create a “brightness” effect on the final render while trying to figure out the “life” variable (the time the effect simulates before it fades out). I left it in and added a uniform for it, because I had planned to include that effect in the original design anyway.
For the time-being, the final shader code for actually rendering the effect looks like this:
uniform vec2 pixelUnit;
uniform float deltaTime;
uniform float life;
uniform sampler2D heatMap;
uniform sampler2D pressureSample;
uniform sampler2D velocitySample;
uniform sampler2D roundMap;
uniform float heatMapRow;
uniform float brightness;
varying vec2 vuv;
// represents which colour gradient our effect will use
const float numberOfHeatRows = 8.0;
// main program
void main()
{
    // get velocity as a unit per pixel
    float speed = sqrt(
        texture2D(velocitySample, vuv).x * texture2D(velocitySample, vuv).x +
        texture2D(velocitySample, vuv).y * texture2D(velocitySample, vuv).y
    );
    // calculate the spot on the UV map of the heat texture we want to use
    float heatV = 1.0 - (heatMapRow / numberOfHeatRows);
    // calculate heat based on circular center around center of image
    float round = texture2D(roundMap, vuv).x;
    float heatAmount = (2.0 - speed - round) * clamp(life, 1.0, 10.0);
    // position on the heat map "amount of heat"
    vec2 heatPos = vec2(heatAmount, heatV);
    // get colour from heat map
    vec4 color = texture2D(heatMap, heatPos) * brightness;
    // final colour
    gl_FragColor = vec4(
        color.r * color.a,
        color.g * color.a,
        color.b * color.a,
        color.a
    );
}
The good news was that after a very quick research session, I was able to easily figure out how to export a PNG from the data in a canvas. Using three.js this was pretty straightforward (after scratching my head a little over how to do something explained online in three.js).
First, you need to set a flag (preserveDrawingBuffer) on the renderer being added to the canvas that tells it to keep the back-buffer’s data. Otherwise it gets cleared, and reading the data from the canvas returns an array of zeros.
renderer = new THREE.WebGLRenderer({preserveDrawingBuffer: true, alpha: true, autoClear: false});
The other two flags firstly tell the canvas that we will be able to see underneath it (the checkerboard effect), and secondly that it will not be cleared automatically (we need to preserve what’s been rendered because this data is used in the shaders).
The next thing was simply using a JavaScript function called “toDataURL”, which grabs pixel data from an image or canvas DOM element (probably others too, I haven’t tried things like SVGs or custom fonts). I then turned this into a function that can be called on the “Plasmation” class (the class that holds all the system’s functionality, like updating, initialising and rendering the canvas).
I can then get the data returned from this, and simply insert it into the src attribute of an image element. So, instead of an image path on the server, it will contain a “compressed” data version of the image.
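A minimal sketch of what that looks like (BufferData matches the method name used in the exporter code above; the rest is illustrative):

// on the Plasmation class: wrap the canvas pixel grab in one call
this.BufferData = function()
{
    // renderer.domElement is the WebGL canvas three.js is drawing into
    return renderer.domElement.toDataURL("image/png");
};

// later, drop the data URL straight into an image element
$("#image_output").attr("src", plas.BufferData());
// the resulting src looks like "data:image/png;base64,…"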
(I’ve put “…” in there because that data would otherwise be about 112,000 characters long…).
The reason for using the base64 data method is that rendering on the canvas is a client-side job, so I literally cannot get a URL for the image on the server unless I converted it, sent the data to the server, converted it back to image data using PHP, saved it to the server, and then had the JavaScript ping the server again for the URL of this new image to insert into the src attribute. This seemed a bit ridiculous, so I simply made use of the kind of data URL that sites like Google use for their search engine (I’m sure you’ve copied the source of an image from a Google image search thumbnail and had it return a stupid URL like the one in the code block above). However, one issue is that my original project spec wants thumbnails of the simulation, which means I may have to do the server round-trip anyway; but for now this approach is faster and has less chance of error.
Shader loader
Looking at three.js tutorials, there seems to be a trend of people either putting shaders into an HTML element…
<!-- a noob way of creating vertex shader code -->
<script id="vertexShader" type="x-shader/x-vertex">
void main() {
gl_Position = vec4( position, 1.0 );
}
</script>
… or using an array of strings for each line of the shader…
I looked at this and shook my head, as in both cases you can’t use syntax highlighting and they’re generally hard to work with. I knew there must be a more usable way, like how in C++ you can just create “shader.glsl” and load the file.
It turns out loading a file is kind of hard when you don’t have a web server, which is why people opt for these methods (they’re safer for beginners)… but I have a web server. So I created a shader loader function that uses a jQuery AJAX call:
/**
* custom function that loads a shader
* uses ajax to read a shader from the web server
* @description returns a full string of the shader code
* @return string
*/
this.LoadShader = function(url)
{
return $.ajax({
url: url,
async: false
}).responseText;
};
Then, to use the shader with a three.js material, I just need to create it like this:
var kernel = this.LoadShader("/shaders/kernel-vert.glsl");
var advect = this.LoadShader("/shaders/advect-frag.glsl");
// create advect calculations shader
// normPixelWidth, normPixelHeight, pixelUnit, pixelRatio;
advectVelocityShader = new THREE.ShaderMaterial({
uniforms: {
pixelUnit: { type: "v2", value: pixelUnit },
pixelRatio: { type: "v2", value: pixelRatio },
scale: { type: "f", value: 1.0 },
sourceSample: { type: "t", value: zeroTexture }, // texture 1 in, texture 1 out
velocitySample: { type: "t", value: zeroTexture }, // texture 1 is used later and loops back to here
deltaTime: { type: "f", value: options.fluid.step }
},
vertexShader: kernel,
fragmentShader: advect
});
You’ll notice that the shaders are stored in variables first. I could simply stick the function call in the “vertexShader” part of the call, but I needed to use some of the shaders in other three.js materials, so it was neater to load them all beforehand.
The bad news
So my other task this week was needing to get the fluid simulation working. This, of course, was extremely difficult.
After researching some examples online, I found a WebGL fluid simulation by Jonas Wagner that uses shaders for the velocity, force, advection, jacobi, etc. calculations and runs in real-time (http://29a.ch/2012/12/16/webgl-fluid-simulation). I contacted Jonas and he gave me permission to use his shaders. I’ll get into what actually had to be changed in a later post, because I had to change a lot, and the project has to be 70% my own work.
After about 2 full days of programming the shaders and little sleep, I managed to get three.js to first have a proper camera, render to a scene for each shader iteration of the fluid simulation, and figure out how to render-to-texture.
It basically boils down to the following set of steps (a rough code sketch follows the list):
Create renderer context on canvas (described above)
Create orthogonal camera
Load textures to be used, eg the “heat” texture and initial velocity
Load shader files with my ajax shader loader (mentioned above)
Create all needed render target textures “velocityTexture1”, “velocityTexture2”, “divergenceTexture” etc.
Create a scene for each shader pass (6 total)
Create a mesh for each shader pass (6 total)
Apply shaders to the meshes and add them to the scenes for rendering
Position meshes so that they are in the exact centre of the camera and fill up the camera frustum.
Call the render function
In the render function, render each scene 1 by 1, making sure to export to the correct texture.
Where needed, send that texture to the next shader for morphing.
Repeat the render function 60 times a second.
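To make those steps more concrete, here is a trimmed-down sketch of the kind of three.js setup I mean (only two of the passes are shown, and the sizes and options are illustrative rather than the actual values I used):

// orthographic camera that exactly frames a unit quad
var camera = new THREE.OrthographicCamera(-0.5, 0.5, 0.5, -0.5, -1, 1);

// render targets that the shader passes write into
var velocityTexture1 = new THREE.WebGLRenderTarget(512, 512, { type: THREE.FloatType });
var velocityTexture2 = new THREE.WebGLRenderTarget(512, 512, { type: THREE.FloatType });
var divergenceTexture = new THREE.WebGLRenderTarget(512, 512, { type: THREE.FloatType });

// one scene and one full-screen quad per pass, each quad using its own ShaderMaterial
var velocityScene1 = new THREE.Scene();
velocityScene1.add(new THREE.Mesh(new THREE.PlaneGeometry(1, 1), advectVelocityShader));

var divergenceScene = new THREE.Scene();
divergenceScene.add(new THREE.Mesh(new THREE.PlaneGeometry(1, 1), divergenceShader));

// each frame, every scene is rendered into its target, e.g.
// renderer.render(velocityScene1, camera, velocityTexture2, false);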
The texture I used for the initial velocity of the fluid was something like this:
I used the red and green channels as the velocity of the particles. This was just a test for now, because in reality a texture like this cannot represent negative velocity: I can only ever use a number between 0 and 1, where 1 is the brightest value (255) of the red or green channel. For the final version, I’ll have to make the neutral colour a dark yellow (127 red, 127 green), with anything above or below that representing positive or negative velocity.
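In other words, the channel value has to be re-centred around 127 (0.5 once normalised) to get a signed velocity back. A quick sketch of that mapping (maxSpeed is an arbitrary scale, not a value from the project):

// map a 0..255 channel value to a signed velocity and back,
// with 127 (~0.5 once normalised) meaning "no movement"
var maxSpeed = 1.0; // fastest representable velocity, in whatever units the sim uses

function channelToVelocity(channel)
{
    return ((channel / 255) - 0.5) * 2.0 * maxSpeed; // -maxSpeed .. +maxSpeed
}

function velocityToChannel(velocity)
{
    return Math.round(((velocity / maxSpeed) * 0.5 + 0.5) * 255); // 0 .. 255
}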
Once all this was set up, the next step was to make sure it all worked. And it didn’t. And because I didn’t fully know how three.js does things behind the scenes, or what flags it sets in WebGL, it was very hard to debug (three.js makes it easier to make normal 3D scenes, but that’s not even close to what I’m trying to do).
The render code iterates over each frame, and looks like this:
// 1
advectVelocityShader.uniforms.sourceSample.value = velocityTexture1;
advectVelocityShader.uniforms.velocitySample.value = velocityTexture1;
advectVelocityShader.uniforms.deltaTime.value = 0.01666;
renderer.render(velocityScene1, camera, velocityTexture2, false);
// 2
addForceShader.uniforms.velocitySample.value = velocityTexture2;
renderer.render(velocityScene2, camera, velocityTexture2, false);
// 3
divergenceShader.uniforms.velocitySample.value = velocityTexture2;
divergenceShader.uniforms.deltaTime.value = 0.01666;
renderer.render(divergenceScene, camera, divergenceTexture, false);
// 4
jacobiShader.uniforms.divergenceSample.value = divergenceTexture;
// set up vars for jacobi shader loop
var originalPressure0 = pressureTexture1,
originalPressure1 = pressureTexture2,
pressureSwap = originalPressure0;
// this iterates over the jacobi shader (stores texture data and then swaps to another one
// so that the new data can be stored in the second, and then swapped back)
for(var i = 0; i < options.fluid.iterations; i++)
{
    // update shader this iteration
    jacobiShader.uniforms.pressureSample.value = originalPressure0;
    renderer.render(pressureScene1, camera, originalPressure1, false);

    // swap textures around
    pressureSwap = originalPressure0;
    originalPressure0 = originalPressure1;
    originalPressure1 = pressureSwap;
}
// 5
pressureShader.uniforms.pressureSample.value = pressureTexture1;
pressureShader.uniforms.velocitySample.value = velocityTexture2;
renderer.render(pressureScene2, camera, velocityTexture1, false);
// 0 render visualiser
particleShader.uniforms.deltaTime.value = 0.01666;
particleShader.uniforms.pressureSample.value = pressureTexture1;
particleShader.uniforms.velocitySample.value = velocityTexture1;
The “false” flag on each render call is a “clear screen” flag, which I didn’t want because I needed to keep the previous values in that texture for the next iteration.
This took a lot of effort to go over the shader tutorials and try and convert them into three.js functionality. But then, it was finally complete. And the result was:
I finally figured this out over the space of about 2 days: it was something to do with the uniforms in the shader already existing and being bound to the “fog” uniform, even though I wasn’t binding them to anything to do with fog (I assume it was a uniform location issue with WebGL rather than JavaScript). But then, once I fixed it, the shader was not even updating over time, but rather just showing the first frame.
Eventually I added a debugger that displayed certain textures, and I noticed that the second iteration of the velocity was actually working, but in the next render, it was being reset.
I suspected that the textures being rendered to could not contain negative values, and therefore the pressure would not have been working correctly. I also noticed that the jacobi loop was not doing anything to the textures. So it was either something to do with the textures being reset every frame, or the scenes losing their information once another scene was rendered. This may be because the scenes are rendered to a texture and that texture may have been a single one in memory; it could also have been that the textures could not be modified in memory after being initially set. Either way, I had no control over these, so I had to figure out a different way of approaching the render cycle.
The amount of frustration involved with this was incredible, as I didn’t know where to even start to find a solution. There were no fluid simulations similar to this online that used three.js, which is what I needed to use for this. I also didn’t know many people that could help.
I’ll get into how I overcame this in the next post.
Got the canvas to render a basic scene with a cube
What up
So, I enjoy writing these journals because it lets me explain things to a website, which helps figure things out somehow.
Anyway.
More website work
The majority of the work this week was sorting out the website templates and getting users to be able to create an account, log in and see their scenes. This was quite easy thanks to my framework (the “Corn” framework). I also added it all to Bitbucket for version control with git.
Some issues I had were purely me being an idiot, but they were still issues and they still took up time. The first one was trying to use DBO methods to save the scene once a user created it. I was trying to save about 12 fields at once and it was causing an exception; I realized it was just because I had spelled the last field differently to the one in the SQL preparation query.
The second issue was trying to get jQuery to initialize the three.js canvas using $("#canvas").appendChild, which should have worked, but apparently jQuery has its own way of doing this, which is just .append(). Nice work.
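For reference, the working version looks something like this ("#canvas" being whatever container element the page uses):

// three.js creates its own canvas element; hand it to jQuery's .append(), not appendChild
var renderer = new THREE.WebGLRenderer({ alpha: true });
renderer.setSize(512, 512);
$("#canvas").append(renderer.domElement);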
I managed to get the canvas working in the end and got three.js to initialize a cube, which is a good start. The next step is getting the shaders running. Looking around the net, I found a WebGL fluid simulation example by Jonas Wagner where he uses IBOs etc. and texture maps to store things like pressure and velocity. I could go through these for ideas on ways to implement my own type of fluid simulation that includes a pixel shader for the “heat” of each cell, based off a gradient of colours over heat level. This will allow users to use whatever colours they want based on these colour gradients.
Here’s some screenshots (the interface still needs styling up, but that should be done after I get the fluid simulation and exporting done.)
More redesign
I’ve had some thoughts and I still can’t decide on the format of the data in the database. On one hand, I want to be able to post the JavaScript object data straight to the database as a JSON string, which would be faster and easier. On the other hand, this is insecure, makes the data harder to read once saved to the database, and may prove hard to debug in the future. It also poses the risk of the data becoming corrupt if I screw up the saving functions at any stage.
After contemplating a lot, the way I am leaning at the moment is having the JavaScript in the editor post JSON requests to the server that do specific things like “update resolution”, or something more compact like “save scene properties”. This would be good because I can just send the needed variables to the server only when they need to be updated. It also lets me remove the “save” function and only leave “save as”, because the scene will technically be saved automatically.
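As a sketch of the kind of request I have in mind (the endpoint and field names are placeholders, not real routes in the app yet):

// push one small, specific change to the server instead of the whole scene
function SaveSceneProperties(sceneId, properties)
{
    $.ajax({
        url: "/scenes/update",        // hypothetical endpoint
        type: "POST",
        dataType: "json",
        data: {
            id: sceneId,
            action: "save scene properties",
            properties: JSON.stringify(properties)
        }
    });
}

// e.g. auto-save when the resolution slider changes
SaveSceneProperties(42, { resolution: 512 });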
The downside of this is that I would need to write many functions to get and set this data, and these will all need to be written, secured and tested in the test phases, which need to be completed this week. I think there will be a fair bit of copying and pasting for most of it, so I could turn a lot of the repetitive functionality into shared functions.
An issue that has cropped up during this redesign is that a simulation will not have much randomness in the particle generation initially. This means it would be a good idea to allow the user to “paint” the areas where pressure and velocity start in the simulation; otherwise it would look very round and boring. I’ve added another tool on the toolbar for letting the user paint onto the layer / emitter the direction that the fluid pixels will have from the first frame. This also gives the user more freedom.
Facebook page
Pretty straightforward: created a page, added a picture, and linked it in the footer of the app. I might need to make a Google+ account and something else, like Tumblr, YouTube (for tutorials), or Pinterest.
Trello
I’ve also started adding things to Trello, making sure I have tickets for everything and that they go into the correct column when done and tested.
What’s next?
The first thing I want to get working is the exporting ability. This will require somehow grabbing the pixels out of the canvas element and saving them as a 32-bit PNG.
In the next 2 weeks or so, I will be getting the fluid system into the application. This will require researching some fluid code done with webgl and implementing it with three.js. Looking at some demos online, this should be fairly straightforward.
This week I focused on getting the design documentation and technical documentation complete for the project.
Database design
Something I ran into is that because the data for the online tool will be saved in a database, I need to decide what format the data is saved in. The two choices were to store things in the database as their own fields, for example “camera_x” or “tint_red”, “tint_blue”, etc.; but some things would need to be relational and in second normal form. This occurs with the curves that happen over time: things like the colour of the effect transitioning over time need about 4 numbers per node in the curve editor, which would mean an indeterminate amount of data for each scene. Therefore, after some research, I’ve decided to just save numbers per frame, without curves. This means the database will have a table called “keyframes”, and these will all belong to certain scenes. They will contain a number for which frame it is, plus colour and other data, and in the app they will simply transition with a normal “lerp” (linear interpolation). Future versions could have curve types like “easing”, but I don’t have time to make it more complex.
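To illustrate the keyframe idea on the app side (the field names here are just examples), interpolating a value between the two keyframes either side of the current frame is a plain lerp:

// keyframes as they might come back from the "keyframes" table
var keyframes = [
    { frame: 0,  tint_red: 255 },
    { frame: 30, tint_red: 64 }
];

// linear interpolation between the keyframes either side of the current frame
function SampleKeyframes(frames, frame, field)
{
    var prev = frames[0], next = frames[frames.length - 1];
    for (var i = 0; i < frames.length; i++) {
        if (frames[i].frame <= frame) prev = frames[i];
        if (frames[i].frame >= frame) { next = frames[i]; break; }
    }
    if (next.frame === prev.frame) return prev[field];
    var t = (frame - prev.frame) / (next.frame - prev.frame);
    return prev[field] + (next[field] - prev[field]) * t;
}

// e.g. the tint at frame 15 is halfway between 255 and 64
var tint = SampleKeyframes(keyframes, 15, "tint_red");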
New interface
Because of this new keyframe method, I had to re-make the design documentation and change the example screenshots.
As you can see, the interface now does not have tabs for “workspace” and “export” views; instead, you simply see what the finished product will look like. Each layer will be rendered to a graphics buffer and cached for easy scrubbing through the timeline. The third button on the bottom-left of the timeline is an “add keyframe” button, which adds a keyframe to all layers.
The reason I didn’t want to combine the layers into a single buffer is that the fluid simulation would otherwise need to be updated for all layers rather than just one. Separate buffers also allow layers to be transformed individually without having to re-render the images underneath.
Further along this week, I will be putting the fluid simulation into the tool and setting up the JavaScript framework for it to render to buffers.
Straight into it. So far, I’ve managed to get a web server up and running and set up an environment to view the particle simulation.
So far, so good.
Some things to note here: this is heavily based off the Cornstar framework. However, I’ve made some modifications to the framework to make it more modular, like moving classes into different files and separating Plasmation-specific functions into their own set of files. The folder structure can be seen here in PHPStorm:
The benefit of this is that I can use this framework in other projects simply by keeping it in its own repository. If I need to update any of the framework, I can push the changes to that repository and pull them straight into other projects.
After all this, I found that my time management was a bit off: the plan had me creating a full-on 3D Navier-Stokes particle simulation. This was a bit silly, as the final result was going to be 2D anyway, and the initial plan had already changed to not include a camera in a 3D world any more. This means I may as well just do a 2D Navier-Stokes simulation using code from multiple places on the net.
Finding code for the simulation
After deciding that it’d be easier to do a 2D fluid simulation, I went back and looked at some of the simulations that use WebGL:
I’ll be looking at these in detail and determining which is best to use for my situation. Most of them were mentioned in my research journal, but only as examples.
Firstly, I’ve changed the category on my 2014 projects menu to “Plasmation”, but it seems to have removed it from all my old posts for some reason. Anyway…
Framework
After creating Cornstar last semester, and working on Game Faming, I’ve begun to realize that CakePHP is more annoying to use than it is useful. Don’t get me wrong, it’s a good framework, and its helpers and security are great. The issue I have with it is that my assumptions about what certain things do are usually wrong, and it takes longer to get something stupid working by searching Google than it would if I had created the framework myself and knew everything about it.
So, I’ve decided to use the MVC framework I developed for Cornstar instead of CakePHP. I received pretty good grades for it, so it can’t be all that bad. The advantage is that I know exactly how it works, which stops me hitting brick walls over knowing, or having to search up, how to properly use something in CakePHP.
A disadvantage of this is, of course, that if I get stuck with something I can’t look it up. However, I know more about PHP itself than about the CakePHP framework, so if I get stuck on anything PHP-related I can look it up, and luckily the PHP community is enormous. All in all, I think switching to my own framework is the best option.
The Framework
So, having decided to use the Cornstar framework, I will also need to update it so it can be used in other programs. The way it is now, some code in the main library of the framework is Cornstar-specific, meaning that to make the framework modular I’d need to re-organise some parts, separating out the things that will not change. To achieve this, I will simply move the program-only files into a folder called “corn”. After looking around, the only reference to “corn” in the Linux world is one at some university; ideally, if I decide to make my framework public, it’d need to be installable via something like “apt-get install corn”. Therefore I will simply be calling the framework “Corn”. This is because it is based off Cornstar, and a corn cob typically has a nice juicy golden array of kernels, which could represent the functions and classes of the framework. Also, who doesn’t love corn?
Tracking goals
After writing a big fat “todo” list for this project, I realized how important it is to set goals and give them achievable time frames. Many times I’ve sat around stressing and reaching for the main goal of “project finished”, but without smaller steps it just feels unfinished ALL the time. It’s like trying to climb to the top of something without a ladder; the ladder has rungs, which can be treated as baby steps towards an ultimate goal. It’s also important to set a time or date by which something should be complete, to stop feature creep and to get that feeling of “done this on time” at the right time. The other night I made a list of things to do and a time limit for each. I couldn’t meet the times, so I kept pushing them forward, but it helped me track how long I was actually spending on things, which helps me set reasonable times in the future. Programming time flies. So fast. Maybe because I enjoy it?
I think what I’m trying to say here is basically: break tasks up and set a (reasonable) time limit for them. It makes it a lot easier to manage and track time, and tracking time improves estimations. This means I can charge people more and not be late on projects.
Initial plan
The initial plan for this is to first set up the localhost, set up databases and shiz, and then get a demo of the page layout and interface up and running. Another thing I want to do is create unit tests in the actual application. This will help with debugging and make it easier to test things once another feature or bug fix goes in, because commonly, when something in the code changes, it causes something else to break.
Tests
The way the unit tests will work is that I’ll set up a controller called “TestsController”, which will simply contain a view called “All”. When going there (/tests/all), it will show the test results. There will be tests for every component I add. This benefits me in many ways, including making it easier to make sure everything works on the slightly different live Unix server (compared to the local Windows environment), and it helps with re-factoring code. This Wikipedia article gives a good example of the style of unit testing I want to try and achieve.
To make tests easier, I thought of a possible database setup where there will be three databases on both localhost and the live server: one for the real data and the others for tests. The test data will be manually removed and re-entered at the end of testing. The reason there are three is that there needs to be a database that can be wiped and repopulated from another database. So there will be the live data, a test data template (with sample entries) and the actual test data (copied from the template). MySQL allows me to copy data into a database from another one, but does not allow me to create a database using pure scripts, which is why there needs to be a spare database.
The issue with the above is that I’d have to maintain two sets of test data, re-entering the test data whenever the database changes before I can test. This should be okay, however, if the database doesn’t change much.
After some research, I can see that this sort of testing is a feature of the test-driven development (TDD) methodology, where tests are written first and then specific results for each test need to be met in order for it to pass. The important part of TDD is the specifications, so these must be written, and use case diagrams should be done for most of them (the complex ones). This means that perhaps the development methodology for this project could be TDD.
Some test ideas (sketched in code after this list):
Create account username field:
isBlank – checks to see if a string is blank
isOneChar – checks to see if string is a single character
startsWithNumber
etc.
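As a quick sketch of what those checks mean (written in JavaScript for illustration; the real versions would sit in the PHP TestsController):

// quick sketches of the username-field checks
function isBlank(value)
{
    return value.trim().length === 0;
}

function isOneChar(value)
{
    return value.trim().length === 1;
}

function startsWithNumber(value)
{
    return /^[0-9]/.test(value);
}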
Other development methodologies
Although TDD seems like a good one to use, I am still researching others, so I may change my mind about which one to use. But if I don’t, then using TDD in this project will be a good experiment, as long as I record the results.
Design changes
As for the design documentation, I didn’t really change anything apart from the way the camera is set up. Thinking of Photoshop and Flash, neither really has a “camera” as such, just clunky ways to change an object’s size in the scene. I think for Plasmation I’m going to include a camera that acts like both a camera in 3D editing software and Photoshop. The camera can be moved around in a sort of “camera mode” (holding the right mouse button in the scene) using WASD to move like an FPS game (and how Unity does it), except you won’t have control over the camera in another view; the view will ALWAYS be the camera. This saves having the three tabs at the top of the scene (edit mode, scene, side-by-side). However, it changes the design, as the user will no longer control the camera through a scene edit mode.
This week I focus on getting peer reviews. This means that anyone reading this is free to make comments, suggestions and constructive criticism.
I’ve been posting the same thing in multiple places and it is as follows:
I’m posting to ask for feedback and opinions from artists in the games industry and hobby game artists for a tool I am developing for particle effect creation. This is not advertising or anything to buy my product, but a friendly hello and some opinions for a game programming degree assignment.
The background is that there’s not many programs around that allow someone to easily create realistic sprites for their game’s particle system. Games these days seem to have nicer looking particles and engines are powerful enough to render animations and other warping effects. These sprites would require advanced animation programs like FUME FX and similar, After Effects, and hours spent in Photoshop.
In addition to this, I find in tools like FUME, there’s a giant waiting time between setting up the effect and then having to render it all, only assuming how it will turn out in the end. Why do this when graphics cards are powerful enough to do almost all of this rendering? Look at game engines today like Unreal Engine 4. They are looking more and more realistic and this is because of graphics card power.
So the tool I’m proposing is going to be an extremely simple tool that lets you paint particles of your choice of properties for smoke, fire or magic effects. It will then allow you to animate them by giving the particles random velocities or adding nodes that repel or draw in the particles to customise slightly. You can then export the animation as a sequence in a PNG ready for engines like Unity.
The program will also be browser-based and use WebGL. One of the main focuses is going to be a live preview of the effect by utilizing the graphics hardware on the user’s computer. This will allow most of the effect to be rendered instead of it being represented as white dots and running at 1 frame-per-second. My goal is to have it run at 60 fps, but will settle for 30.
So for example, you want a nice explosion animation and can’t afford stock animations or want something more personalised, you can open the website (browser based), create a particle type like fire, give it some velocity ranges, rotations, etc, then create a 10-million or so burst that you can step back and forth through and alter as you see fit. You can then add another particle type like smoke, and create a burst. You can then export it.
You may be thinking “why not just use FUME FX? It’s better and looks great”. Well, you still can. Basically. The main differences with this tool though is that it’s aimed at game artists as FUME tends to be more of a video exporting tool. It also removes the huge interface of Maya or 3DS Max with all the tools that you probably won’t need. All you really need is a center point and a particle simulation and export.
As stated, this is not just an idea, I’m going to be creating this, and have to for my degree, so any feedback is greatly appreciated. If you have any constructive criticism, let me know.
Some things I do want to know from you guys is:
What are the best features of Fume FX? What are the things that you cannot live without?
Do you find the interface overwhelming?
Are all options used or are there settings that are mostly used and some that should be default anyway?
Would you pay for software like this particle tool idea?
Do you think having it run in a browser would be beneficial? Why / why not?
What are the hardest special effects to create?
What features do you wish were there / were easier?
Would a GUI speed boost make workflow a lot easier? (live preview instead of rendering)
What sort of interface would work better? Is one like Photoshop a good choice? Would an interface that goes away from the trends of Photoshop be a burden?
Any other feedback.
So for anyone that reads this and finds it interesting, please leave your feedback here or on the forums. The forums are as follows:
This week I worked on some preliminary research into how fire and smoke look in reality and what sorts of effects are possible with real-time rendering today. I wanted to do this so I knew what was possible before planning something that perhaps couldn’t be done. I started by looking at what fire actually is: according to HowStuffWorks.com and About.com [HowStuffWorks.com Fire] [about fire], it is basically the energy (light and heat) created by the process usually called “combustion”. So the fire you see is actually bright carbon atoms.
[fire images]
Looking at the images above, we can see that the shape of fire closely resembles the particle simulation I worked on, except that you can see the individual particles in the simulation. This is because, in reality, the particles are at the atomic level, so the simulation would need many, many more particles to look like real fire. Also, in the simulation, each particle is represented as a 1×1 pixel dot on the screen, so particles further away should be smaller and closer ones bigger. I tested how small I would need to make a large export of a screenshot, and the result varied depending on the density of the cloud (for example, with lots of spaced-apart particles). As a good mid-point, I decided that shrinking the cloud down to 20% blended the particles together enough to appear close to a real fire effect.
[image of particle shrunk down]
I tested it in Unity with a fire effect and it looked okay after a few adjustments to the obscure shape.
[images of fire in Unity game]
Another test I did was to try and find some stock image effects in Adobe Fireworks that I could apply to the image like a glow effect or blur effect, but neither improved the image.
[images of blurred particle fireworks screenshot]
I will need to find some special image effects to see how they look. One that could improve the look of the cloud would be an effect that blends close particles together into large round balls, similar to a lens blur but keeping the solid edges of the particles. This may be impossible, but it would be good to test anyway.
This week I focus on research and creating a schedule for the creation of the tool. I figure that without research, I might not be able to accurately produce a workable schedule. We’ll start with the documentation creation process and how long this should take:
Start date: 12th February 2014
Due date: 28th May 2014
This in total is 15 weeks to do documentation, with a presentation on the due date.
Week 1 to 5
Discuss and research initial ideas
Week 6
Finalise Preliminary Idea
Week 7 to 12
Research project and document drafting
Week 13 to 14
Final Drafts
Week 15
Final Documentation and Publishing
Week 16
Submission
Since I already have the initial idea, I’m going to start on the actual project and document drafting. Below is a plan of what I’ll need to complete by the end of the planning stage:
(W-7) 26th March – Create a plan. Then Research some more effects and specifics about realistic particle simulation.
(W-8) 2nd April – Make a start on the documentation, making the templates for each document and make lists of the parts I’ll be doing.
(W-9) 9th April – More research on things
(W-10) 16th April – More research on things
(W-11) 23rd April – Will be travelling at this time, but can do little things when needed.
(W-12) 30th April – End of document Drafting – most of the documents should be complete by now.
(W-13) 7th May – Final drafts to start.
(W-14) 14th May – Back from Travelling – Knuckle down in final drafts. Start presentation.
(W-15) 21st May – Final documentation and publishing. Finish presentation, Do mock recording etc.
(W-16) 28th May – Due date! Everything should be done and handed in on this day. Everything should be done before this but leaving a week for any catch up.
On each of these days I’ll be writing a blog entry like this one, as it is easy to track things online. On the date of final documentation and publishing, I’ll copy all the blog entries into a document for handing in.
Research Journal
Learning Plan
Peer Reviews
Design Documentation
Technical Design Documentation
System Analysis Plan
Project Management Documentation
Financial Outline
These will need to be submitted both as a PDF and as a bound document. I’ll need to make sure I have enough printer ink and buy the binder.
Research Journal
For this task, I will use WordPress to log each week what I’ve been researching, along with peer reviews and any information I can find about the system. At the end of each week I will copy the entries to a Word document and spell-check them. This will be printed and bound. I will be watching a lot of videos of real special effects, like plane crashes and gunfire, to see how fire and smoke act in real life, and then study images of fire to reproduce it.
Learning Plans
Some ideas of learning objectives could be:
Work out whether saving files to the server or directly to the user’s hard drive is better. Should I use a combination of both?
Learn how to optimise particle systems on the video card using researched methods
Marching cubes / metaballs for water effects – maybe use normal maps and specular maps to add better lighting
Volumetrics for cloud effects
Screen space ambient occlusion for smoke
Particle light emissions into smoke
Volumetric lighting inside smoke clouds caused by particles
Real life particle movement with wind and the best way to do this
Creating normal maps
Learn what industry people want
Undo and Redo feature using “Memento pattern”.
I might need to elaborate on these more and explore / find new ideas as the documentation develops. Each of these will need to be saved as a separate PDF and printed along with other material to be bound.
Peer Reviews
I would like to post on as many forums as I can and also ask at least 3 industry special effects designers what they like and dislike about Fume FX, gathering as many opinions as I can. These can be people on YouTube that post special effects demos, and forum users. Some things I could ask are:
What are the best features of Fume FX? What are the things that you cannot live without?
Do you find the interface overwhelming?
Are all options used or are there settings that are mostly used and some that should be default anyway?
Would you pay for software like this particle tool idea?
Do you think having it run in a browser would be beneficial? Why / why not?
What are the hardest special effects to create?
What features do you wish were there / were easier?
Would a GUI speed boost make workflow a lot easier?
What sort of interface would work better? Is one like Photoshop a good choice? Would an interface that goes away from the trends of Photoshop be a burden?
The general idea is to prove my niche, really. I need to know these things so I can design the interface around them. These will all be recorded and presented as part of the peer review documentation, and I will end it with a conclusion.
Design Documentation
This will be where the tool is explained in as much detail as possible. Some general points would be:
Tool will run in primarily Google Chrome (and Firefox)
Complete functionality. It will:
Be able to save and load on the server
Be able to save and load files locally
be able to create, delete and edit “layers” (smoke on one and fire on another for example)
Export a sprite-sheet that can be used in Unity’s particle tool (must do more research about compatibility with normal maps and different effects, alpha sorting issues and using multiply and additive together)
Can export different channels, like a normal map, additive and multiply
use the GPU to simulate the system
can create frames
Frames can be scrubbed through
A frame can be chosen with a small box where the section will be exported per frame
The “camera” can be moved around to center the frame
The “frame” will be a square, but a “fade out” can be applied in a spherical manner inside the frame; as this is a pre-particle exporter, the effects need to be technically sections of a larger particle system.
Tools in the toolbar will be
Brush to draw particles to the 3d world
A pushing tool to drag particles around like the liquify tool in Photoshop
A gravity tool for pulling particles towards the cursor
Interface will be designed around simply the default styles that three.js comes with.
Different interface view modes:
Game engine preview – will show the rendered version with all effects added, using a 3-point light system that can be customised. This lighting will not be included in the render, as lighting in games can change. For example, a water particle cannot have a bright light reflection rendered at the top-left because the game’s light might be under it.
Real-time preview (slower to render, but I will try to make it at most 1 frame per second; each frame can be cached into a buffer on the canvas and displayed until the user moves the camera). This will be the main area that will be used
Normal mode will show 10% of the particles
Motion vector preview – a normal map looking (x y z velocity represented as red green blue channels) as to visualize movement in a single frame.
Depth buffer view (shows the depth texture, mainly for my own debugging but may be useful for others)
Particles could light up when hovering over them
The background can be customized to show the transparency in the screen
Screen effects can be applied, like depth of field, screen space ambient occlusion on “smoke” particles (using the depth buffer), glow, and so on
Particles can be simulated using vertices with a vertex-coloured additive effect on each particle to make more realistic fire (will go into detail later, after more research), smoke as volumetrics, particle lighting and ambient occlusion (more research needed)
Can add a game light to simulate how it would look in a game
Server will need a database to store user data
To be usable, the system needs to have an undo / redo feature. This should be done using the “Memento pattern” (see the sketch after this list) and should be implemented from the start.
A status bar showing the particle count, some extra info and a status of the program (like loading or saving or exporting etc.)
Toolbars: one on the left, the right, the bottom and the top. I will try to keep them simple, without many options, as one of the aims of this project is a program that’s less daunting and easy to learn.
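Following on from the undo / redo item above, here’s a rough sketch of how the Memento pattern could work on the editor side (the class names and the shape of the scene state are hypothetical):

// Memento pattern sketch: snapshots of scene state pushed onto an undo stack
function SceneMemento(state)
{
    // deep-copy so later edits don't change the saved snapshot
    this.state = JSON.parse(JSON.stringify(state));
}

function History()
{
    var undoStack = [];
    var redoStack = [];

    this.Save = function(scene)
    {
        undoStack.push(new SceneMemento(scene));
        redoStack = [];
    };

    this.Undo = function(scene)
    {
        if (undoStack.length === 0) return scene;
        redoStack.push(new SceneMemento(scene));
        return undoStack.pop().state;
    };

    this.Redo = function(scene)
    {
        if (redoStack.length === 0) return scene;
        undoStack.push(new SceneMemento(scene));
        return redoStack.pop().state;
    };
}

// usage: save before each edit, then undo/redo swap snapshots in and out
var history = new History();
history.Save({ resolution: 512, tint_red: 255 });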
The name of the program I’m leaning towards is something like “Plasmation”, because the word isn’t really a word, apart from appearing in a few online places that say it means “forming or moulding”, and this tool is like moulding a special-effect particle. It also has “plasma” in it, which, to me, sounds special-effect-y.
Technical Design Documentation
This will consist of a lot of UML diagrams, basically. Once I’ve sorted out the design documentation and done enough research on the formulas and ways to produce the effects, I’ll be able to do this properly. I think this section will be important to show how the web server and client-side parts will work together, as the challenge here is that because the program works on a web server, the data needs to be transferred up and down to it and also loaded in by users from their hard drives. A list of these would be:
Component listing
Pseudo code for all systems and components
Program design diagrams in UML
Data flow diagrams in UML
The thing to start off with will be the way the server and client could interact, but as I haven’t done the design documentation, this is just an example (made with Lucidchart, a great example of an online tool). As you can see, this is a basic flow of data for logging in / creating an account / the workspace.
System Analysis Plan
In this I will basically talk about why I’ll be using an agile development methodology. I will most likely use Scrum because it is what I’m used to. An issue with this is the morning meetings, which I can replace with these journals and a discussion of my goals and blocks.
Requirements analysis will also need to be documented, which is basically a document that describes the requirements of the program. This will be closely tied to the technical documentation. I will need to include it as a separate document with an introduction briefly explaining the purpose of the application, the scope, objectives and success criteria of the project, and the usual glossary of terms.
The second part of the documentation will cover the requirements and the analysis model of the application, and it will finish with references.
This section will also include a test plan and test cases for requirements validation.
Project Management Documentation
This will consist of a Work Breakdown Structure (WBS), complete with resource allocations, and a Gantt chart with critical path. I will be using Gantt chart software for this and will discuss in my journals which software is best.
This part of the documentation will also include a risk assessment and contingency plan.
Financial Outline
The application could either be free or a subscription-based program where advanced features are locked. For this project, however, the program will be very basic, and adding users and subscriptions just adds another layer of difficulty. For the actual outline, I will be creating a mock pitch for industry professionals to get funding. The actual money involved will cover things like web servers, domain names, my time, food and workspace programs for creating at home, software, internet and power costs.