I recently had the privilege of working on the interactives in a new office in Boston. I'll go into more detail about the individual interactives in another post, but I wanted to take a deeper dive into how one of the unifying elements worked. We're typically tasked with making several interactives for a single client feel cohesive both visually and functionally, a task that can be difficult given different clients' needs. A designer showed me a proof of what looked like an LED array with no motion study, and a vision of how to get there popped into my head.
At the time I was working primarily on corporate projects, and didn't usually get to have a lot of fun with graphics so I took the chance and ran with it. I wanted to make a background animation that not only looked great but also responded to a user's touch in order to make them feel more connected to the app. Small moments of wonder like these are what interest me about programming: being able to conjure something unexpected and create a unique experience.
Cellular Automata
Strawberry Cubes by Loren Schmidt
I knew I wanted to use cellular automata to make the animation, and to come up with a set of rules that felt alive, so that the user feels like they influence it yet never have total control. Cellular automata are used in a variety of games and visual effects and are a great example of simple rules producing complex results. A cellular automaton's simulation world is a grid of cells that can each be either 0 or 1: 0 is "dead" and 1 is "alive." At each step of the simulation, you check the squares adjacent to a given square and add up the "alive" ones. Depending on the rules you select for your simulation, you then set the current square to "alive" or "dead." Different rules create different effects, where groups of living squares seem to travel across the world and patterns emerge. The most famous of these rule sets is Conway's Game of Life.
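To make the rules concrete before we move anything to the GPU, here's a minimal CPU sketch of one Game of Life step in JavaScript. This is purely illustrative and not the project's shader code; the function names are mine.

```javascript
// One step of Conway's Game of Life on a 2D grid.
// Values: 0 = dead, 1 = alive; edges wrap around (toroidal world).
function lifeStep(grid) {
  const h = grid.length, w = grid[0].length;
  const next = grid.map(row => row.slice());
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      // Count the eight wrapped neighbors.
      let neighbors = 0;
      for (let dy = -1; dy <= 1; dy++) {
        for (let dx = -1; dx <= 1; dx++) {
          if (dx === 0 && dy === 0) continue;
          neighbors += grid[(y + dy + h) % h][(x + dx + w) % w];
        }
      }
      // Conway's rules: a live cell survives with 2 or 3 neighbors;
      // a dead cell is born with exactly 3.
      if (grid[y][x] === 1) {
        next[y][x] = (neighbors === 2 || neighbors === 3) ? 1 : 0;
      } else {
        next[y][x] = (neighbors === 3) ? 1 : 0;
      }
    }
  }
  return next;
}

// A "blinker": a row of three live cells flips between horizontal
// and vertical every step.
const blinker = [
  [0, 0, 0, 0, 0],
  [0, 0, 0, 0, 0],
  [0, 1, 1, 1, 0],
  [0, 0, 0, 0, 0],
  [0, 0, 0, 0, 0],
];
const stepped = lifeStep(blinker);
```

This is exactly the "check your neighbors, apply a rule" loop described above; the shader version does the same work, just one fragment per cell in parallel.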
Checking every surrounding square is expensive, and for a lot of pixels in an image it adds up quickly. Because of this, and because of a general curiosity about GPGPU, I wanted to implement my automaton on the GPU, which meant using a technique called FBO ping-ponging. I won't detail too much of the setup here because others have gone into it in more depth (LINK UNLEASH YOUR GPU HERE), but know that it involves saving your information to textures on the GPU rather than arrays on the CPU, so the information doesn't have to be sent over the pipeline every frame.
I'm only going to touch on the GLSL code here to keep this brief, but if this piques your curiosity, feel free to give the example three.js code a peek by inspecting each element!
Our Ping-Pong, GPGPU shader
Sampling: get texel size
Since we're reading information that's stored in a texture, we need to make sure we're reading the right bin. Think of it like a two-dimensional array that uses a float for access: some floating-point values will land between the array bins and be interpolated between the values we need. So we set up our textures to use nearest-neighbor interpolation for hard edges between our pixels. Next, we need to determine the floating-point value that accesses each grid cell. We do this by computing (1.0 / width, 1.0 / height) * (x, y).
```glsl
vec2 st = vec2(texCoord.x, texCoord.y);
vec2 texel = vec2(1.0 / res.x, 1.0 / res.y);
```
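For intuition, the same texel math on the CPU. The half-texel offset to hit cell centers is my addition for safety with nearest-neighbor sampling; the shader above uses 1/res directly, and both function names here are mine.

```javascript
// UV step between adjacent cells of a grid stored in a texture.
function texelSize(resX, resY) {
  return { x: 1.0 / resX, y: 1.0 / resY };
}

// UV that addresses the *center* of cell (x, y). Sampling at centers
// keeps nearest-neighbor lookups away from bin boundaries.
function cellCenterUV(x, y, resX, resY) {
  const texel = texelSize(resX, resY);
  return { u: (x + 0.5) * texel.x, v: (y + 0.5) * texel.y };
}
```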
So now that we have all of our positioning figured out, we need to actually sample our image. We're only worried about one channel of color because we're processing this as numerical data. Use the appropriate sampler for your version of GLSL to grab the surrounding samples. If we're on an edge, we just grab from the other side of the texture, because it's set to wrap.
```glsl
// See if this square is alive
bool live = ( texture2D( tex0, vec2(st.x, st.y) ).r > thresh );

// Sample the neighbor directly above us
float cur = texture2D( tex0, vec2(st.x + texel.x * 0.0, st.y + texel.y * -1.0) ).r;
```
Rule Following
Here's where we get to start being creative! Every cellular automaton has "rules" that it follows. These rules look at the current game state and decide how to set the next state based on how many cells are alive or dead around the cell currently being assessed. Small changes to the rules can have drastic consequences, and finding appropriate values for this part of the shader easily took as long as, if not longer than, the rest of the shader.
```glsl
float onval = max(accum / (float(neighbors) + 0.5), 0.5);
float drawLive = 0.0;
if (live) {
    if (neighbors < 2 || (neighbors > 4 && neighbors != 8)) {
        drawLive = 0.0;
    } else {
        drawLive = onval;
    }
} else {
    // The cell is dead: make it live if necessary
    if (neighbors == 3) {
        drawLive = onval;
    }
}
```
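The branch above can be mirrored on the CPU to sanity-check rule tweaks without recompiling the shader. This is a sketch under my own naming; `accum` is the summed brightness of the neighbors, `neighbors` the count of live ones, matching the shader's inputs.

```javascript
// CPU mirror of the shader's rule branch.
function nextCellValue(live, neighbors, accum) {
  const onval = Math.max(accum / (neighbors + 0.5), 0.5);
  if (live) {
    // Die of loneliness (< 2) or overcrowding (> 4), except the
    // special-cased 8, which this rule set lets survive.
    if (neighbors < 2 || (neighbors > 4 && neighbors !== 8)) return 0.0;
    return onval;
  }
  // A dead cell with exactly 3 live neighbors is born.
  if (neighbors === 3) return onval;
  return 0.0;
}
```

Being able to poke at edge cases like the "8 neighbors survives" exception in a REPL is much faster than re-running the whole app.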
Noise
At this point we leave the realm of CA purists: we no longer base all of our decisions on whether or not we've got friends around our little square. I added noise to the simulation to break up instances where a pattern would remain largely static. If the Perlin noise at a given location crosses a threshold, we turn the cell off, meaning that "safe" squares can now suddenly die, sending ripples throughout the game board.
```glsl
// Calculate some noise to use for random values
float noise_v = noise(vec3(st.x * 259.2349, st.y * 259.2349, time));

// TODO: Refactor to not have if statements in shader, ya dummy!
float onval = max(accum / (float(neighbors) + 0.5), 0.5);
float drawLive = 0.0;
if (live) {
    if (noise_v > 0.7 || neighbors < 2 || (neighbors > 4 && neighbors != 8)) {
        drawLive = 0.0;
    } else {
        drawLive = onval;
    }
} else {
    // The cell is dead: make it live if necessary
    if (neighbors == 3 || noise_v > 0.8) {
        drawLive = onval;
    }
}
```
Decay
The goal for the visuals was to look like LEDs, objects in the physical world. We also want our background to blend into, well, the background. Both of these things require subtlety, and currently our simulation doesn't have that. For that reason I added a slow decay to the values in the simulation; the universe trends toward equilibrium.
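A minimal sketch of the decay idea on the CPU: each frame, every cell's brightness is pulled down by a small multiplicative factor, so lit cells fade out instead of snapping off. The 0.95 factor here is illustrative; the post doesn't state the production value.

```javascript
// Multiply every cell's brightness by a decay factor each frame.
function decay(values, factor = 0.95) {
  return values.map(v => v * factor);
}

// After many frames, everything trends toward zero — equilibrium.
let frame = [1.0, 0.5, 0.0];
for (let i = 0; i < 100; i++) frame = decay(frame);
```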
Output
So we've figured all that out, but what the heck do we do with it? You can't write to a texture while you're reading it; you'd run into a race condition where values are read from different steps. Because of this we bind our input texture before we start drawing, set our output texture as the render target, and then swap the two textures at the start of each frame. All we have to do is set the fragment color with the value output from our simulation.
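The ping-pong dance can be sketched with plain arrays standing in for the two GPU textures. This is illustrative only; the "step" here is a trivial stand-in for the CA shader pass, and the names are mine.

```javascript
// Two buffers: read from one, write into the other, then swap roles.
function makePingPong(size) {
  return {
    read: new Float32Array(size),
    write: new Float32Array(size),
    swap() {
      const tmp = this.read;
      this.read = this.write;
      this.write = tmp;
    },
  };
}

const buffers = makePingPong(4);
buffers.read.set([1, 0, 1, 0]);

// One "frame": write a processed copy of read into write — read is
// never modified mid-pass, so there's no race — then swap so the
// result becomes next frame's input.
for (let i = 0; i < buffers.read.length; i++) {
  buffers.write[i] = buffers.read[i] * 0.5;
}
buffers.swap();
```

On the GPU the swap is the same idea: you just exchange which FBO is bound as the render target and which texture is bound for sampling.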
```glsl
#ifdef GL_ES
    gl_FragColor = vec4( vec3(drawLive), 1.0 );
#else
    oColor = vec4( vec3(drawLive), 1.0 );
#endif
```
At this point our automaton is looking pretty good! I'll come back at the end to add our beloved interaction, but for now let's move on to our output shader!
Making our output fragment shader
Draw a screen-sized circle
Let's take a break from sampling real fast to talk about something really cool. You might know that you can pass the graphics card geometry that is built on the CPU, but did you also know that you can describe shapes to be drawn entirely in the fragment shader? That's how graphics demos are made. We're going to do a basic form of this with a distance function. The simplest shape to draw this way is a circle, which coincidentally is the shape we need for our LED matrix. Simply put, we check every pixel in our viewport: if it's further from the center than our radius we color it black, otherwise we color it white. This sort of thing is powerful; you can draw lots of things just by using distance functions.
```glsl
float circle(in vec2 _st, in float _radius) {
    vec2 l = _st - vec2(0.5);
    float f = 1.0 - smoothstep(_radius - (_radius * 0.01),
                               _radius + (_radius * 0.01),
                               dot(l, l) * 4.0);
    return f;
}

void main (void) {
    vec2 st = texCoord.xy;
    float cv = circle(st, 0.1);
    vec3 color = vec3(cv);
#ifdef GL_ES
    gl_FragColor = vec4( color, 1.0 );
#else
    oColor = vec4( color, 1.0 );
#endif
}
```
Draw Many Circles!
So we've got a circle. Cool, but I thought we were drawing a lot of those? We need to multiply our current coordinate by the number of circles we want in each row, and then take the fractional part of that value. We've just created a thousand little universes which all draw their own circle, as they each have their own 0→1 space. But hold up! These don't look very good: because they're so small, their harsh all-or-nothing coloring creates jagged edges. We add some basic anti-aliasing by sampling the circle from three points that are very close together and then averaging the result. This gives us a nice soft falloff on our edges.
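Here's a CPU port of the fract() tiling trick, hard-edged for simplicity (no smoothstep feathering or three-sample anti-aliasing), with function names of my choosing:

```javascript
// GLSL's fract(): keep only the fractional part.
function fract(v) {
  return v - Math.floor(v);
}

// Hard-edged version of the shader's circle(): 1 inside the radius
// (measured from the tile center at 0.5, 0.5), 0 outside.
function circle(stX, stY, radius) {
  const lx = stX - 0.5, ly = stY - 0.5;
  return (lx * lx + ly * ly) * 4.0 < radius ? 1.0 : 0.0;
}

// Multiply UVs by the tile count and take the fractional part: every
// tile gets its own local 0..1 space, so one circle() draws N*N circles.
function tiledCircle(u, v, tiles, radius) {
  return circle(fract(u * tiles), fract(v * tiles), radius);
}
```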
Read from the appropriate cell in the ping pong texture, and lerp between them
So remember that fractional value? Now we need to do the opposite and take the whole-number part, rounding down. We then take that number and get the texel for that value. This lets us access our data textures, as we're now in the same numerical space as they are. We're going to read from both steps at the same time. Remember, because we've just computed our step, we have access to both the previous step and the current step. We pass in a uniform to represent the amount to fade between them.
```glsl
vec3 val      = texture2D(targetValueTex, texCoord).rgb;
vec3 last_val = texture2D(lastValueTex, texCoord).rgb;
val = mix(val, last_val, fade_amnt * fade_amnt);

float cv = circle(fract(st * 20.0), 0.1);
float brightness = cv * (val.r * 0.8 + 0.2);
vec3 color = vec3(brightness);
```
Next Sections coming soon
Images and Color
Outside of our shader we can add and overlay images. In our final app this is controlled outside of the shader because not all of the applications choreograph their images the same way.
Coming soon: Touch Integration
So we're almost done and need one little finishing touch. Remember how I was going on about the user being able to interact with our background? We're going to add that in now. Because our data is represented as an image, we can do this easily using whatever drawing functions exist in the framework we're using. We get the pixel location of our finger (or mouse) in the image, and then draw an appropriately sized white ellipse at that location.
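A sketch of that touch stamping, with a plain 2D array standing in for the data texture. The names and the radius handling are mine; in the real app this is done with the framework's own drawing calls into the texture.

```javascript
// Map a touch point in screen pixels to a cell in the simulation grid,
// then set every cell within `radius` (in cells) of it to "alive".
function stampTouch(grid, touchX, touchY, screenW, screenH, radius) {
  const h = grid.length, w = grid[0].length;
  // Screen pixels -> grid cell.
  const cx = Math.floor((touchX / screenW) * w);
  const cy = Math.floor((touchY / screenH) * h);
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      const dx = x - cx, dy = y - cy;
      if (dx * dx + dy * dy <= radius * radius) {
        grid[y][x] = 1; // "white" — alive under the finger
      }
    }
  }
  return grid;
}

const grid = Array.from({ length: 8 }, () => new Array(8).fill(0));
stampTouch(grid, 400, 300, 800, 600, 1.5); // touch at screen center
```

The stamped cells then feed straight into the next simulation step, which is what makes the ripples spread out from your finger.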