The art of Snakebird

We have gotten a bunch of questions about the art in Snakebird, so instead of trying to answer them on Twitter we’re collecting everything in this article. It should cover everything we’ve been asked and then some. If you’re unfamiliar with 3D terminology and/or Unity, some of the stuff below may be tricky to follow, however. Also, for clarification, the game is made in Unity 4.6.x, so it’s possible some things have changed or issues have been ironed out since.



Deciding on a set of rules that the art must follow can be a good way to make an art style look cohesive. It can actually be very difficult to mix different styles and make them look good together, like having pixel art characters, photos as backgrounds and hand-painted objects. The styles very easily clash and work poorly together. To dodge this we laid down a few basic rules for the art to follow early on:

  • Use nothing but flat colors. No shades for shadows or highlights and no gradients.
  • No partial opacity, only 1-bit alpha. This was to further enforce the flat-color rule when using textures with alpha. We didn’t literally use 1-bit alpha textures anywhere, but we made an effort to ensure that any texture with alpha had a sharp edge and that no object used partial alpha.
  • Use an orthographic camera to eliminate depth parallax. This was to make the game look like a 2D game (it’s all 3D).

We ended up breaking all rules where it made sense and where we felt we could get away with it.


The levels

As mentioned above, everything is in full 3D, but using orthographic projection:

Because of this we can use a bunch of tricks to make “dressing” levels with art assets a lot easier. For instance, we use a variety of tiles to shape the brown ground parts, but none of the tiles are unwrapped. Instead they use a shader that automatically maps their texture according to their world space position.

This allows us to disregard UVs entirely and speeds up placing tiles considerably, as we don’t have to make a million different tile variations. Tiles were generally designed to be snapped into place rather than placed freeform, in order to follow the gameplay grid closely. This makes the levels easier to read without the need for a visual grid (though we did put one in there anyway just in case). Doodads were placed freely where it made sense, though.
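The idea behind the tile shader can be sketched outside of ShaderLab: derive the texture coordinates from the fragment’s world position instead of authored UVs. This is an illustrative Python sketch, not the actual shader; `tiles_per_unit` is an invented tuning constant.

```python
def world_space_uv(world_pos, tiles_per_unit=1.0):
    """Map a texture by world position instead of unwrapped UVs
    (sketch of the tile shader's idea, not the real shader code)."""
    x, y, _z = world_pos
    # Wrap into the 0..1 range so the texture repeats seamlessly,
    # regardless of how each tile mesh happens to be unwrapped.
    return ((x * tiles_per_unit) % 1.0, (y * tiles_per_unit) % 1.0)
```

Two tiles placed one unit apart sample adjacent parts of the same texture automatically, which is why no per-tile UV work is needed.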

Since we’re using an orthographic camera, any parallax we would have gotten from depth is lost. To counter this we use a script that offsets the position of whatever it’s attached to based on the object’s Z position. The further back something is, the more it follows the camera. Objects placed at z = 0 do not move at all, and objects closer to the camera get an inverted offset.
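The core of such a script is one multiplication. Here’s a hedged sketch of the idea in Python (the real thing would live in a Unity `Update` in C#); `strength` is a made-up tuning constant.

```python
def parallax_offset(camera_xy, z, strength=0.1):
    """Fake depth parallax under an orthographic camera: shift the
    object toward the camera's XY position in proportion to its depth."""
    factor = z * strength  # z = 0 -> no movement at all
    # Positive z (further back) follows the camera; negative z
    # (closer to the camera) gets an inverted offset for free.
    return (camera_xy[0] * factor, camera_xy[1] * factor)
```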

The themes of the various areas were mainly about color choices. Some had their palette before their theme (the ruins with the boxes), while others were theme ideas we tried to fit in color-wise (the underwater and teleporter levels). Making everything work and stay readable with only flat colors was a bit of a challenge. We did not know enough about it when we started and had to figure it out as we went. Palettes went through a lot of revisions, and in the end it turned out that the simpler you make it, the better it works; the colors did not have to make sense as long as the palette worked. That’s why there are teal clouds, baby blue moons and chartreuse stars in there.


The snakebirds


Under the hood the snakebirds consist of three main parts: a head, a body and a tail. The game logic must always have a head and a tail piece, but the body can be any number of segments long, even 0. This is why the smallest size of a bird is 2: just a head and a tail piece. The body segments and tail also alternate between two colors, which are automatically mapped from a 2×2 texture through code.
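The alternating color mapping is just a matter of pointing each segment’s UVs at one texel of the 2×2 texture. A sketch of the idea, assuming the two body colors sit side by side in the bottom row (the actual texel layout is a guess):

```python
def segment_uv(index):
    """Pick a texel center in a 2x2 palette texture by segment index:
    even segments get one color, odd segments the other."""
    col = index % 2                   # alternate between the two columns
    return (0.25 + 0.5 * col, 0.25)  # UV at the center of that texel
```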


For the body segments we ended up using a simple segmented mesh with vertex animations instead of bone animations. We skinned the mesh to a ridiculous number of joints and then animated them for all 13 required moves a body part can make (various combinations of sliding into/out of cells). We then baked the animation onto the vertices and exported it to Unity as vertex animations using a custom script. The reason we went with vertex animations is that they make it a lot easier to precisely control the shape of the mesh, which is important for the look.

Additionally, when you move a bird the game’s logic snaps it into its new position and shape instantly. To make the bird look like it slides into its new position, all animations start 1 cell “behind” and then animate into the shape the bird just ended up in.
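In other words, the visual offset starts one full cell opposite the move direction and shrinks to zero over the animation. A minimal sketch of that relationship (normalized time and a unit grid are assumptions):

```python
def visual_offset(t, move_dir, cell_size=1.0):
    """The logical bird is already in its new cell; the mesh starts
    one cell 'behind' along the move direction and animates to zero
    offset as t goes from 0 to 1."""
    remaining = 1.0 - min(max(t, 0.0), 1.0)
    return (-move_dir[0] * cell_size * remaining,
            -move_dir[1] * cell_size * remaining)
```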


The face of the bird is just a bunch of bone-animated parts that sit on top of a regular body segment. We ran into an issue when mixing bone animations and vertex animations where the Animator in Unity would play one frame behind the vertex animations. To fix this we found that we could force an update on the Animator the same frame the vertex animations start.

Eyes use some special magic sauce. Instead of using a black mesh on top of a white mesh for the pupils we use only one mesh and the local position of a joint in a custom shader to move an eye texture around in UV space.
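The magic sauce boils down to scrolling the eye texture’s UVs by the joint’s local position. A sketch of the math the shader would do per fragment (the `scale` constant and axis conventions are invented for illustration):

```python
def eye_uv(base_uv, joint_local_pos, scale=0.5):
    """Offset the eye texture's UVs by a joint's local position so the
    pupil appears to move around, without a second pupil mesh."""
    u, v = base_uv
    jx, jy = joint_local_pos
    # Subtracting the offset moves the sampled texture in the same
    # direction the joint moved.
    return (u - jx * scale, v - jy * scale)
```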

To make the birds react to stuff around them, the game looks for relevant game objects every time you make a move and plays an animation according to what it finds, if anything. Additionally, birds will prioritize some things over others; food is more interesting than spikes, for example.

When a snakebird is looking at something, we blend in an animation layer that only affects the eye-direction joint and manually set a time in an animation that has the eyes looking around in a 360-degree motion. I.e., to make the eyes look left we set the “eye-look” animation to the time where the pupils are positioned on the left.
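Mapping a look direction to a time in that clip is a simple angle-to-time conversion. A sketch, assuming the clip starts with the pupils looking right and sweeps counter-clockwise (the clip layout is a guess):

```python
import math

def eye_look_time(target_dx, target_dy, clip_length=1.0):
    """Map the direction toward a point of interest to a time in the
    360-degree 'eye-look' clip."""
    # Angle of the target direction, normalized to 0..2*pi.
    angle = math.atan2(target_dy, target_dx) % (2 * math.pi)
    # Fraction of the full sweep, scaled to the clip's length.
    return angle / (2 * math.pi) * clip_length
```

Looking left (direction (-1, 0)) lands at the halfway point of the clip, which matches the “pupils on the left” pose.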


The goal

The goal is a combination of a quad with a custom shader and a particle system. The shader takes two textures: a diffuse with a colored spiral, and a special texture where all four channels are used as alpha channels.

The R and G channels are added together to create a basic blob shape, while the B and A channels are multiplied on top of the blob to deform the silhouette a bit. After that the alpha gets clamped to create a sharp edge. All four channels also continuously rotate and scale in the shader to create motion in the shape. The result is a metaball-ish blob.
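Per pixel, the channel combination described above can be sketched like this (the threshold value and exact clamping are assumptions, not the shipped shader):

```python
def blob_alpha(r, g, b, a, threshold=0.5):
    """Combine the four packed alpha channels: R+G form the base blob,
    B and A multiply on top to deform the silhouette, then a hard
    threshold gives the sharp edge."""
    shape = min(r + g, 1.0) * b * a
    return 1.0 if shape > threshold else 0.0
```

In the real shader each channel is also sampled with its own rotating/scaling UVs, which is what keeps the silhouette in motion.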


A rainbow texture is then added into the mix. Funnily enough, getting the rotational pivot in the right place for the rainbow was the hardest part of setting up the shader. Lastly we added a common particle system that shares the goal’s rainbow texture and UVs.




Performance and optimizations

Initially we used Unity’s Animator component on all the doodads you can click on, and on PC that was fine, but once we started profiling on mobile devices it turned out the Animator is much more expensive at runtime than the legacy animation system. The benefits of the Animator are obvious: you get a visual, easy way to work with the state machine where you can set up all the transitions and so on. But the performance gain was just too juicy to pass up, so we went in and refactored almost everything to use the legacy animation system. This meant we had to redo the state machines in scripts. Luckily most of these were very simple, and a lot of the clickables used the same template, so we could make a fairly generic scripted state machine to cover most of it.
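A generic scripted state machine of that kind doesn’t need much: states that map to clip names and a transition table keyed on events. This is an illustrative Python sketch (the real one would be a C# MonoBehaviour driving the legacy Animation component; state and event names are invented):

```python
class ClickableStateMachine:
    """Minimal table-driven state machine: each transition names the
    next state, whose clip would be handed to animation.Play()."""

    def __init__(self, transitions, initial="idle"):
        self.state = initial
        self.transitions = transitions  # (state, event) -> next state
        self.played = []                # stand-in for played clip names

    def fire(self, event):
        key = (self.state, event)
        if key in self.transitions:     # unknown events are ignored
            self.state = self.transitions[key]
            self.played.append(self.state)
        return self.state

# One shared template covers many clickables: only the table differs.
sm = ClickableStateMachine({("idle", "click"): "wiggle",
                            ("wiggle", "done"): "idle"})
sm.fire("click")
```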

Optimizing with static batching wouldn’t work for our levels since we were faking parallax with scripts, and flagging game objects as static locks them in world space. To bypass this we made “combiner groups” that combine all compatible meshes parented under them in the build step with a script. Since we’re using a lot of game objects to build the shapes of the levels, the result was a drastic reduction in the number of draw calls per level.
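At its core, combining meshes means concatenating vertex lists and re-indexing triangles so the group renders as one draw call. A sketch of that step (Unity’s own `Mesh.CombineMeshes` would also merge normals, UVs and so on):

```python
def combine_meshes(meshes):
    """Concatenate (vertices, triangle_indices) pairs into one mesh,
    offsetting each mesh's indices by the vertices already added."""
    vertices, triangles = [], []
    for verts, tris in meshes:
        base = len(vertices)           # index offset for this mesh
        vertices.extend(verts)
        triangles.extend(i + base for i in tris)
    return vertices, triangles

# Two single-triangle meshes become one mesh with six vertices.
verts, tris = combine_meshes([
    ([(0, 0), (1, 0), (0, 1)], [0, 1, 2]),
    ([(2, 0), (3, 0), (2, 1)], [0, 1, 2]),
])
```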

The unclickable vegetation in the background used to consist of two alpha planes on top of each other with a bone animation for movement in the PC version. For the mobile version this was changed to meshes where we modeled the entire shape, with a vertex shader for motion. Getting rid of the alpha removed a lot of overdraw, and since the meshes no longer used bone animations we could combine all of them to reduce draw calls. It’s one of those rare cases where something was both cheaper and looked better.
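A vertex shader for that kind of motion is typically a phase-shifted sine wave per vertex. A hedged sketch of the math (all constants invented; the real shader runs on the GPU, not in Python):

```python
import math

def sway_offset(world_x, time, amplitude=0.05, frequency=2.0, weight=1.0):
    """Vegetation sway: offset a vertex by a sine wave phase-shifted by
    its world X so neighboring plants don't move in lockstep. `weight`
    (e.g. 0 at the roots, 1 at the tips) keeps the base anchored."""
    return amplitude * weight * math.sin(time * frequency + world_x)
```

Because the motion is stateless per vertex, combined meshes can still share one material and animate without any bones.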

Since most of the art uses flat colors, we could get away with very few textures and thus load fewer resources. At least half of the objects in the levels share a white 4×4 texture; to get the color we wanted we just tinted it in the shader. Similar but different: the world map uses a “palette” texture for most of its objects. This means a lot of stuff could share the same material and texture. The exceptions are things with scrolling UVs (water edges and the river), which obviously need their own material.

world map palette texture


Another (theoretical) loading-time optimization we did was to export a lot of meshes as submeshes. This means we can load very few meshes to get most of the things we need for each area; all the pieces that make up the structures of the temple area are technically only two meshes. Loading few things is typically faster than loading many, regardless of their size, though the actual gain from this in Snakebird’s case was likely negligible either way.

That should cover everything we’ve been asked and what we could think of!


by / August 16, 2016 / Posted in: Art, Development, Snakebird, Technical