Sunday night we got the theme: “The toys are alive!”. My first thought was a game involving toys that move only when no one is looking – a kind of creepy thing. I tend to lean heavily towards puzzle games, so I spent time figuring out what puzzles might arise from that. Finally, I had recently played a few indie games that use an isometric grid-based 3D world (Monument Valley and Back to Bed – not a big fan of Back to Bed so far though), so that was going to be my inspiration for the visuals.
I had spent the previous 3 days or so playing around with Unity, so I felt ready to tackle my first Unity project. Some of my goals during the competition were:
- Use this as an opportunity to become really familiar with Unity.
- Make a complete, if short, game. I wanted a complete end-to-end thought-out experience. So I was careful to scope the size of the game so that I could complete it in time.
- I wanted to do something with visual impact. It’s a challenge to do this with my limited artistic abilities.
First, I’ll give an overview of what I spent each of the 7 days on. I feel like I had a pretty successful game jam, so maybe there is some interesting info here?
- Day 1 – Basic environment setup. I made a small test level mesh in Blender and got it into Unity. I found a suitable animated player model and implemented player movement. Just the basics. I also thought about the gameplay mechanics I wanted.
- Day 2 – Basic game mechanics. I got most of the gameplay mechanics working, so I basically had a prototype of what the game would feel like. This assured me, by the end of day 2, that everything I wanted to do was feasible.
- Day 3 – Demo -> Game. I worked on the overall stuff that makes it a game and not just a demo: main menu, sequencing from one level to another, etc.
- Day 4 – Level design. I started level design – 3 basic “tutorialish” levels – and came up with a workflow to take me from a design on paper to a functioning level in Unity.
- Day 5 – Level design. My goal for day 5 was to have an essentially complete game that could be released as is. This day was all about level design. I designed 4 more levels and cut myself off at that point. There were some new mechanics I really wanted to try out, but I forced myself to say “this is complete”. I also came up with an ending to the game, and implemented it as a final “cutscene” level.
- Day 6 – Lighting. I worked on visuals. Essentially I wanted to bake lighting into the game so I could have global illumination with lots of lights, instead of just the harshness of a directional light or two. The visual aspect of the game took a great leap.
- Day 7 – Polish. I worked on minor polish for about 9 hours, after making sure I had functional “backup” builds.
One notable thing is that there was no “audio” day or “graphics” day or anything (except for the light mapping on day 6, but that was a new thing for me, so I had to spend a lot of time on it). I basically worked on the audio and visual art throughout the 7 days. I like to intersperse this work throughout – switching tasks periodically helps me with motivation. I might be getting frustrated by something, but then I put in some new sounds, or a new model. Suddenly the game looks fresh and better and I get more excited about it!
What went well
The main thing that went well was using the Unity game engine. After spending so long with XNA and MonoGame, it was a pleasure to get things up and running so quickly.
- Models generally imported with no problems (unless the models themselves were corrupted, more on that later), and animations were easy to set up
- Very simple to get robust physics set up
- No audio headaches like I’ve had with XNA and MonoGame
- Everything worked on every platform I tried with pretty much no changes (PC, and the Unity web player on both PC and Mac OS). In contrast, with MonoGame there was a significant amount of fiddling with things like texture formats in order to get stuff working on different platforms. I kept being surprised by how everything “just worked”.
- Unity has some great tutorials available for it, and there is so much support and information out there.
- Player state (which in my game just consists of which levels have been completed) is trivial to set up – about a minute of work for what would be maybe half an hour in XNA. Little things like this are nice; I noticed how many other game jam entries don’t save player state.
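The idea is really just a tiny persistent key-value store. Here is a minimal sketch of the same idea in plain Python (not Unity code – the file name `savegame.json` and the functions are mine, for illustration):

```python
import json
import os

SAVE_PATH = "savegame.json"  # hypothetical save file name

def save_state(state):
    # Persist the player state (here: highest completed level) to disk.
    with open(SAVE_PATH, "w") as f:
        json.dump(state, f)

def load_state():
    # Return the saved state, or a fresh one if no save exists yet.
    if not os.path.exists(SAVE_PATH):
        return {"completed_level": 0}
    with open(SAVE_PATH) as f:
        return json.load(f)
```

Even something this simple is enough for a jam game, which makes it surprising how many entries skip it.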
I feel I scheduled my time pretty well. I was careful to not work too hard during the first four or five days to keep my energy up. I got “enough” sleep, but not as much as I wanted. The only time I really questioned whether I would get something done was with the lightmapping on day 6. But it was not essential to the finished product.
I was able to find all the audio clips (and music) I needed on soundsnap.com, and in some cases used stuff I had recorded myself. For instance, the footstep sounds come from my personal audio recordings collection. I cleaned up the sounds in Sony Vegas, then plopped them into the game.
One small flaw I see in Unity is the lack of sound design support. I know people often hate on XACT (one of the audio solutions for XNA), but it really is nice to be able to assemble sounds in a sound design app and then expose a series of knobs for the programmer to tweak. In Unity everything has to be done in code. One example is playing a different random footstep sound each time. There’s no reason this should need to be done in code – it should be part of the sound definition itself.
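The logic I mean is tiny – here’s a sketch in plain Python (not a Unity script; the class name is mine) of picking a random footstep clip while never repeating the previous one:

```python
import random

class FootstepPlayer:
    """Picks a random clip each step, never repeating the previous one."""

    def __init__(self, clips):
        self.clips = list(clips)
        self.last = None

    def next_clip(self):
        # Choose among all clips except the one we just played.
        choices = [c for c in self.clips if c != self.last]
        clip = random.choice(choices)
        self.last = clip
        return clip
```

A sound tool could expose exactly this (clip list, no-repeat flag, pitch/volume jitter) as data instead of making every project re-code it.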
Even though I’m no graphic designer or artist, I am pleased with how the game turned out visually. The main parts of the level are just cubes/squares/whatever. I wrote code to give them the proper UV channels (I seem to be forever clueless about UV unwrapping in Blender) and applied a very “spartan” texture in Unity. This, combined with Unity’s lightmapping support, gives me a look I’m fairly happy with. It can be hard for someone like me to come up with a consistent art style, but keeping things simple helps. Given that most of my world was built “programmatically”, I needed few models – I think I only purchased/acquired 5 of them.
I’m pleased with the alternating relaxing/horror atmosphere I gave the game. Something terrible has apparently happened, yet everything seems just hunky-dory.
I didn’t think of the game ending until day 5, so I suppose this was kind of fortuitous. But I think it ties it all together nicely. It was kind of random luck that I thought of the “twist”, I suppose.
I’m a fan of making games with minimal UI, and I think the time it saves in tight deadlines like this is definitely an asset. The goal is that players can just figure out the game by playing it, without needing any tutorials.
What went wrong
In terms of game mechanics, I’m not 100% satisfied with what I came up with. I still think there could be some more fun mechanics in there, and I feel I may have locked myself into certain gameplay elements too early. They say come up with 5 designs, throw them all out, and come up with another – I didn’t do that; I basically went with the first idea I got. In the end, I feel my game has more of a visual/atmosphere impact than a gameplay impact. Maybe that’s fine, but I still think I could have done better. Turning the lights off and having the toys only move when it’s dark doesn’t really add anything to the gameplay – it just makes it a bit annoying (though it does add to the creepy atmosphere). The other thing I’m not satisfied with is that while solving the puzzles you can get into an impossible-to-win situation, so you need to restart the level. It’s not a huge deal since the levels are small, but something like a rewind feature could help (not trivial to implement). When I saw this happened frequently enough, I tried to write logic that detects when things have “gone bad” and automatically pops up a “reload level” button. Unfortunately I can’t detect all cases.
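If I did add rewind, the straightforward approach (a sketch of the general technique, not anything in the game – the class and state keys are mine) would be to push a snapshot of the relevant state each turn and pop one to rewind:

```python
class RewindableState:
    """Keeps a history of state snapshots so the player can step back."""

    def __init__(self, initial):
        self.history = [dict(initial)]

    @property
    def current(self):
        return self.history[-1]

    def step(self, changes):
        # Record a new snapshot with the given changes applied.
        snapshot = dict(self.current)
        snapshot.update(changes)
        self.history.append(snapshot)

    def rewind(self):
        # Drop the latest snapshot (but never the initial state).
        if len(self.history) > 1:
            self.history.pop()
```

For small discrete levels like mine the snapshots would be tiny, so the main cost is remembering to route every state change through one place.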
I was able to get very little feedback. Near the end when I was closer to shipping, I did get some friends to try it out. At least one of them – once they caught on and got to the later levels – seemed to really enjoy it.
Nearly every model I downloaded from the web (even those I paid for) had problems. Often the transforms for the individual pieces would be wrong (my soldier model, for one), or half the normals would be inside out. I had to fix up most of the models in Blender.
Speaking of Blender: I know it’s free (so I shouldn’t complain), but I really do have a love/hate relationship with it. The UI just seems to fight against what I want to do. I’m a big fan of standard user interfaces, and Blender just seems counter-intuitive no matter how familiar I get with it. It’s also remarkably hard to find good documentation, despite how widely used it is.
Near the end I started slapping together scripts with more abandon. I suppose that isn’t too big a deal near the end of a game jam project, but it did result in some bugs, and some things ended up being slightly harder to implement. For example, I think I had 3 separate scripts trying to control the camera and stomping on each other. Part of this just comes from my inexperience with Unity and not knowing which way to access game objects when (traverse the hierarchy by name, search by tag, assign a GameObject instance to the script in the editor, etc.). In the end this wasn’t a huge deal though.
Here I’ll talk about some of the design challenges I faced.
I needed platforms to walk on, walls, and collision boundaries for the level. Fundamentally, everything could be inferred from a simple tile map containing floors and walls. I wasn’t sure at first how to do that in Unity. I found the documentation for the class that lets you build meshes at runtime, but I wanted my level to be visible in the editor too. There’s probably a way to do this, but I ended up modeling the level in Blender instead (I at least know how to extrude cubes, split faces off objects, and such).
I only needed very simple texturing on the level geometry – basically, UVs projected along the x, y and z axes. I wasted a bit of time trying to do UV-unwrapping in Blender, but this just seemed over-complicated for what I wanted to do. I contemplated writing a shader in Unity that would calculate UVs based on world position. But I didn’t feel like delving into custom Unity shaders in the middle of a gamejam. In the end I wrote a Blender script that projects UV coordinates onto objects based on the face normals.
Script I wrote in Blender to calculate projected UV coordinates for the x, y and z axes:
```python
import bpy

me = bpy.context.object.data
uv_layer = me.uv_layers.active.data
vertices = me.vertices
textureScale = 0.5
textureOffset = 0.5

for poly in me.polygons:
    for loop_index in poly.loop_indices:
        vIndex = me.loops[loop_index].vertex_index
        theNormal = vertices[vIndex].normal
        pos = vertices[vIndex].co
        # Project along the dominant axis of the normal (use absolute
        # values so faces pointing in negative directions work too).
        ax, ay, az = abs(theNormal.x), abs(theNormal.y), abs(theNormal.z)
        if az >= ax and az >= ay:
            # faces the z direction: project onto the xy plane
            uvProj = (pos.x * textureScale + textureOffset,
                      pos.y * textureScale + textureOffset)
        elif ax >= ay:
            # faces the x direction: project onto the yz plane
            uvProj = (pos.y * textureScale + textureOffset,
                      pos.z * textureScale + textureOffset)
        else:
            # faces the y direction: project onto the xz plane
            uvProj = (pos.x * textureScale + textureOffset,
                      pos.z * textureScale + textureOffset)
        uv_layer[loop_index].uv = uvProj
```
Movement and physics
My game design presented a challenge because I wanted both a fully 3D physics-based world and discrete tile-based movement for the toys in the game. I debated whether or not to keep a grid-based store of the current game state (enforcing one game entity per square, for instance). I’m not a huge fan of duplicated state though, so I decided to just leverage the physics system: when doing my discrete movement, I perform a raycast one world unit in the appropriate direction and see if anything was hit. This has the advantage that things like an opened/closed door blocking movement “just work”. However, I did encounter issues with raycasts missing things, and with casting from the wrong position (for some models, their transform position lay under the platform).
I ended up having to use sphere casts instead of raycasts, and being very careful to adjust the start point of the cast to an appropriate height above the ground. Overall it’s not a very robust system, so I would probably change it if I continue development on this game.
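Conceptually, the discrete move check boils down to something like this – a plain-Python sketch of the logic (names are mine), with the physics query stubbed out as a set of blocked cells standing in for whatever the sphere cast would report:

```python
def try_step(pos, direction, blockers):
    """Attempt a one-unit discrete move on the grid.

    pos and direction are (x, z) tuples; blockers is the set of
    occupied cells (the stand-in for the physics cast result).
    Returns the new position, or the old one if the move is blocked.
    """
    target = (pos[0] + direction[0], pos[1] + direction[1])
    if target in blockers:
        return pos  # the "cast" hit something: stay put
    return target
```

The real version is messier only because the blocker query is a physics cast, which is exactly where the missed-hit and wrong-origin problems crept in.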
When the lights are off, you can hear the bears moving. Some people don’t play with audio, or can’t hear it, so I wanted a visual indicator that corresponded to the bears’ movement, so the player doesn’t need to guess how far they’ve moved. I had a lot of trouble coming up with a good design for this. At first I had dots across the bottom of the screen, but I thought that ruined the immersion. In the end I have a subtle spotlight on the player that rotates – one rotation is one bear step. It’s not very obvious though. In the updated version the spotlight pulses in addition to rotating, which is a bit more obvious but still not ideal.
Things I wanted to get done and bugs
Some additional mechanics I wanted to try were:
- The player can point a light beam or move some glowing object around. It would stay lit when the lights go off, and would prevent bears (or whatever) from moving across its path. In the end I decided that I already had something that prevented bears from moving across an area (the soldiers), so the extra implementation time wasn’t worth the possible extra puzzles.
- A bear or toy that moves towards the player when the lights are off, and kills him if it reaches him.
I would also like to perhaps have something else happening while the lights are off (i.e. instead of the player just waiting).
The final submission for the competition had a few bugs. One made it possible to get stuck unable to turn the light switch back on (player movement is frozen while the lights are off). I couldn’t reproduce it, but it happened to a friend a few times. The light switch trigger depended on the position and direction of the player, and even though I stopped the player in his tracks when the lights went off, I think due to update order there is a possibility that the player moves out of position the next frame. This wasn’t a robust way to do things, and I’ve fixed it in the updated version. Another bug was the lighting on the bear – in certain areas it was just not lit correctly. Have a look: the lighting seems flat and out of whack with the surroundings. The static objects are lightmapped, meaning the global illumination is baked into lightmap textures, so no dynamic lighting is applied to them. Dynamic objects, such as the player or bears, instead need to use light probes. I knew something was up, but I didn’t realize until after the compo was over that I could visualize the light probe cells used by an object. When I did, for a bear it looked like this:
The light probes used were scattered across the level, when they should have been the ones closest to the bear. It took me a while to figure out why: only the center of the object’s bounding volume is used to determine which light probes are used, not its size. In this case, that center turned out to be well below the bear, outside the level volume covered by light probes. I’m not sure exactly why Unity chose those probes, but adjusting the Light Probe Anchor gave me this instead: And now the bears look much better: Another thing I noticed is that the lighting on my doors looked bad. There isn’t really a way around this with Unity’s current system though. Only a single interpolated light probe value is chosen for the entire mesh, so a flat mesh (like the door) will have the same lighting throughout. I could have baked the door into the lightmaps, but then it would have looked incorrect when opened. The color of the door is an important gameplay element, so I suppose it’s ok that the lighting isn’t too interesting on it.
I’m not sure if I’ll continue work on this game – I suppose it might depend on how well it does in the competition. Are the mechanics strong enough to make an interesting game? What other things can I do to make the game interesting? A deeper story? More of an exploratory feel to the world? I know if I continued I’d want to streamline the level generation process (basically have more geometry generated automatically from a simple text file or something). Getting familiar with Unity was definitely a good move career-wise. From a programming perspective though, things are not very challenging. Gameplay programming barely feels like programming. I may look into shader programming in Unity next, because that interests me.