
Decompiling SCI – part 2 – control flow

Here’s a link to part 1 of this series.

From part 1, we now have a directed graph whose nodes are branch-free instruction sequences within a function (the branches became the edges of the graph). This is what we need to perform control flow analysis.

Resources

Here are some resources (generally white papers, course notes or Wikipedia pages) that were useful for implementing control flow analysis.

 

Overview

Given the “raw” graph of instruction sequences, we proceed to reduce this graph by iteratively attaching more semantic meaning to parts of it.

For instance, if we recognize a group of nodes in the graph as a loop, we can replace those nodes with a single loop node that contains information indicating which child nodes represent the head, latch and looping condition of the loop.

The basic control structures we look for are: loops, switch statements, compound conditions and if statements.

When detecting a particular type of control structure, we generally proceed from innermost structures to outermost. There are some possible failure points here, for instance when two control structures overlap and one is not completely contained within another. In that case we have no way to reduce the control flow graph, and we fail the decompilation. However, as far as I know this never occurs in actual game code.

Loops

The first control flow structures we look for are loops. To do so, we must calculate the dominators of each node of the graph. Node A dominates another node B when all paths to node B must first pass through node A. Node A is the immediate dominator of node B when node A dominates node B, but does not dominate any other node that dominates node B. These concepts make formal reasoning over the control flow graph more straightforward.

We use the Lengauer-Tarjan algorithm to calculate the dominators (a link to a paper on this is listed in the Resources above).

With the dominator information, we can identify back edges in the graph (these are the edges where the tail of the loop jumps back to the beginning of the loop). This happens when a node (which ends up being the head of the loop) dominates its predecessor (which will end up being the tail of the loop).
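To make this concrete, here is a minimal Python sketch of how back edges could be identified once the dominator sets are known. The names and the dictionary-based graph representation are made up for illustration; the actual decompiler does not use this exact code.

def find_back_edges(nodes, successors, dominators):
    """Return the (tail, head) edges whose target dominates their source.

    nodes:      iterable of node ids
    successors: dict mapping each node to a list of its successor nodes
    dominators: dict mapping each node to the set of nodes that dominate it
                (including itself)
    """
    back_edges = []
    for tail in nodes:
        for head in successors[tail]:
            # An edge is a back edge when the branch target dominates the
            # branching node: the loop head dominates the loop latch.
            if head in dominators[tail]:
                back_edges.append((tail, head))
    return back_edges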

Consider the following method (participle::doVerb from King’s Quest 6):

	(method (doVerb theVerb &tmp temp0 temp1)
		(super doVerb: theVerb &rest)
		(if (not (proc999_5 theVerb 1 94 28 13 12))
			(= temp0 0)
			(while (< temp0 20)
				(= temp1 0)
				(while (< temp1 7000)
					(++ temp1)
				)
				(DrawCel
					970
					5
					(Random 0 (- (NumCels self) 1))
					nsLeft
					nsTop
					15
				)
				(++ temp0)
			)
		)
		(DrawCel 970 5 0 nsLeft nsTop 15)
	)

Here’s the control flow graph we generate for it (I’ve replaced the assembly instructions with the final decompiled constructs for clarity):

 


Back edges are highlighted in red, and the resulting loops are highlighted in blue and cyan (one contained within the other). When you have a node A that has a predecessor B, and all paths to that predecessor B must go through node A, then this is a loop back edge. The back edge goes from the loop latch to the loop header. We can then follow predecessor paths from the latch back to the head in order to determine which nodes are children of the loop structure.

 

Once we’ve identified a back edge, we then need to find all the child nodes of the loop (the head, latch, and the other nodes that comprise the body of the loop). We can follow the tail node back to the head node to collect all these children.

There is one additional issue here though. Loops can contain break statements. In our situation, these will be nodes whose successors are outside the loop. So following the tail node back to the head node will not pick up these nodes. Instead, we determine the follow node of the loop. This is the node just outside the loop which follows the loop. It’s where breaks will jump to. This isn’t trivial, and this is one of the places where we currently break from graph theory a bit and take advantage of the fact that SCI byte code always proceeds forward in terms of memory addresses. This is also an occasional failure point of the current decompiler.

Once the follow node for the loop has been determined, we can then follow its predecessors, and include any that are dominated by the loop header.
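Setting aside the follow node complication for a moment, collecting the body of a loop for a given back edge is a standard walk backwards over predecessors. Here is a rough Python sketch with made-up names, not the real implementation:

def natural_loop(head, latch, predecessors):
    """Collect the nodes of the loop formed by the back edge latch -> head."""
    body = {head, latch}
    worklist = [latch]
    while worklist:
        node = worklist.pop()
        for pred in predecessors[node]:
            # Walk predecessor paths from the latch back toward the head.
            # The head is already in the set, so the walk stops there.
            if pred not in body:
                body.add(pred)
                worklist.append(pred)
    return body

Nodes that break out of the loop are then added separately via the follow node, as described above.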

Switch statements

The next control structures we attempt to find are switch statements. Switch statements have a distinct instruction pattern in SCI.

A copy of the value of the switch expression is kept on the stack, and this value is duplicated onto the stack again (using the DUP opcode) for each case. Finally, the TOSS opcode is issued to pop the value from the stack at the exit of the switch statement.

The TOSS opcode is the most distinctive thing here – switch statements are the only place this is used in SCI. So the first step is to find a node that starts with a TOSS.

Consider the following method:

	(method (edgeToRoom theEdgeHit)
		(switch theEdgeHit
			(EDGE_TOP north)
			(EDGE_RIGHT east)
			(EDGE_BOTTOM south)
			(EDGE_LEFT west)
		)
	)

Which is represented by the following disassembly:

    (method (edgeToRoom) // method_0f05
  0f05:8f 01              lsp param1      // the switch value
  0f07:3c                 dup             // we dupe it on the stack for each case
  0f08:35 01              ldi 1           // the case value
  0f0a:1a                 eq? 
  0f0b:31 04              bnt code_0f11   // branch to next case
  0f0d:63 2a             pToa north 
  0f0f:33 1c              jmp code_0f2d   // jump to switch exit

        code_0f11
  0f11:3c                 dup             // duping the switch value for the next case
  0f12:35 02              ldi 2           // the case value
  0f14:1a                 eq? 
  0f15:31 04              bnt code_0f1b   // branch to next case, and so on...
  0f17:63 2c             pToa east 
  0f19:33 12              jmp code_0f2d 

        code_0f1b
  0f1b:3c                 dup 
  0f1c:35 03              ldi 3 
  0f1e:1a                 eq? 
  0f1f:31 04              bnt code_0f25 
  0f21:63 2e             pToa south 
  0f23:33 08              jmp code_0f2d 

        code_0f25
  0f25:3c                 dup 
  0f26:35 04              ldi 4 
  0f28:1a                 eq? 
  0f29:31 02              bnt code_0f2d 
  0f2b:63 30             pToa west 

        code_0f2d
  0f2d:3a                toss             // Throw away the switch value on the stack
  0f2e:48                 ret             // since it's no longer needed.
    )

 

This is represented by the following control flow graph:

 

[Image: SwitchRaw]

 

When we find the node that starts with a TOSS, we can enumerate all its predecessors until we find one that dominates it (i.e. all paths that reach the TOSS node go through that node). The predecessor of that node will be the head node of the switch statement.
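One way to express that step as a sketch (reading “predecessors” as walking back through the graph, and assuming loops have already been reduced so the walk terminates; the names here are hypothetical and only meant to show the shape of the search):

def find_switch_head(toss_node, predecessors, dominators):
    """Walk back from the node that starts with TOSS until we reach a node
    that dominates it; the switch head sits just before that node."""
    seen = set()
    worklist = list(predecessors[toss_node])
    while worklist:
        node = worklist.pop()
        if node in seen:
            continue
        seen.add(node)
        if node in dominators[toss_node]:
            # Every path to the TOSS node passes through 'node', so its own
            # predecessor is where the switch value gets pushed.
            preds = predecessors[node]
            return preds[0] if preds else None
        worklist.extend(predecessors[node])
    return None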

Once we have the collection of nodes that comprise the switch statement, we can then proceed to determine the case nodes. We assume the cases follow a pattern like so:

  • a DUP instruction
  • [some expression that puts a value into the accumulator]
  • an EQ? instruction (comparing the switch value to the case expression)
  • a BNT instruction to the next case

Proceeding forward from the switch head, and following the “else” paths of each branch node, we can determine the beginning nodes of each case statement. To find the ends, we can look for predecessors of the TOSS node that are dominated by the case heads.
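A loose sketch of that forward walk, where ends_with_bnt and bnt_else_target are assumed helpers over whatever node type the decompiler actually uses:

def find_case_heads(first_case, toss_node, ends_with_bnt, bnt_else_target):
    """Follow the 'else' path of each case's bnt to find where every case begins."""
    case_heads = []
    node = first_case
    while node is not toss_node:
        case_heads.append(node)
        if not ends_with_bnt(node):
            break                         # a final case with no comparison branch
        node = bnt_else_target(node)      # the "else" side of the bnt is the next case
    return case_heads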

The final node structure for the function above might look like this:

 

[Image: SwitchFinal]

 

Compound conditions

Next, we look for patterns that represent compound conditions (e.g. if (A and B)). In the Cifuentes paper, it is shown that short-circuited compound conditions can be reduced to one of four different patterns that represent the following cases:

  • A or B
  • A and B
  • (not A) or B
  • (not A) and B

In assembly, these would correspond to the 4 variations of the following pattern:

		expression A
		bt/bnt then/else
		expression B
		bnt else	

The Cifuentes paper uses the following diagram to list these out and show the 4 branching possibilities:

 

[Image: CC]

 

More complex boolean expressions end up just being nested compound conditions, so we repeat this pattern-finding iteratively (each time grouping the relevant nodes under a “compound condition” semantic node) until no more compound conditions are found.
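As an illustration, recognizing the plain “A and B” shape amounts to a check along these lines (a sketch with assumed helper functions, not the actual code):

def is_and_pattern(a, fall_through, branch_target, predecessors):
    """Detect the short-circuit 'A and B' shape:

        expression A
        bnt else        ; if A is false, B is never evaluated
        expression B
        bnt else
    """
    b = fall_through(a)                     # the node holding expression B
    if b is None or len(predecessors[b]) != 1:
        return False                        # B must be reachable only through A
    # Both conditions must agree on where to go when they are false.
    return branch_target(a) == branch_target(b)

The other three variations differ only in whether the first branch is a bt or a bnt and whether it targets the “then” or the “else” node, as in the diagram above.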

Some fixup

At this point, we do some fixing up of our control flow graph. Of concern are the patterns for break and continue statements in loops. These break the natural flow of the program and will interfere with our if statement detection.

Fundamentally, an if statement pattern is defined by a header node with two branches, where each branch has a common follow node that is dominated by the header node (i.e., all paths to the follow node pass through the header node). The basic control flow for if-then and if-then-else are shown below:
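In sketch form, classifying a two-way branch node might look roughly like this (hypothetical names again, and only the basic shapes):

def classify_if(header, then_node, else_node, successors, dominators):
    """Classify a conditional node as if-then or if-then-else.

    then_node and else_node are the two successors of the header's branch.
    """
    if else_node in successors[then_node]:
        # The 'then' branch flows straight into the other branch target,
        # so that target is really the follow node of an if-then.
        return ("if-then", else_node)
    # Otherwise look for a common follow node that the header dominates.
    common = set(successors[then_node]) & set(successors[else_node])
    for node in common:
        if header in dominators[node]:
            return ("if-then-else", node)
    return (None, None)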

 

[Image: IfStatementTypes]

 

If the if-statement body contains a break or a continue, then the recognizable pattern won’t be preserved. Consider the following loop:

	(while temp0
		(Prints "A")
		(-- temp0)
		(if (== temp0 4)
			(break)
		)
		(Prints "B")
	)

 

Though it contains an if-statement, we don’t see the if-statement pattern we want in the control flow graph that we’ve constructed for its disassembly:

 


One branch of the if-break node leads to the loop exit. The if-statement doesn’t have an easily recognizable pattern.

 

So we look for jumps to the loop exits (or to the loop header), and replace them with “semantic” break nodes (or continue nodes) and make the subsequent code be their successor node:
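A simplified sketch of that rewrite, using plain dictionaries and a string tag in place of a real semantic node type (all names hypothetical):

def patch_breaks(loop_body, loop_follow, successors, next_in_address_order):
    """Replace jumps out of the loop with semantic break nodes whose successor
    is the code that follows in address order, restoring the if shape.

    successors:            dict node -> list of successors (modified in place)
    next_in_address_order: dict node -> the node that follows it in the byte code
    """
    break_nodes = []
    for node in list(loop_body):
        for i, succ in enumerate(successors[node]):
            if succ == loop_follow:                     # a jump to the loop exit is a break
                brk = ("break", node)                   # semantic break node
                successors[node][i] = brk               # the branch now flows into the break
                successors[brk] = [next_in_address_order[node]]
                break_nodes.append(brk)
    return break_nodes

Continues are handled analogously, with jumps back to the loop header.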

 


A semantic break node has been inserted, and we now have a recognizable if-then pattern (if you’ll pardon the .svg renderer shifting the graph nodes around a bit).

 

 

If statements

If-statements are the last control structures we determine. We do them last because other control structures like loops and switch statements might contain patterns that we would incorrectly detect as if-statements; handling them last ensures those other structures are identified correctly.

The if-then and if-then-else patterns were described in the previous section. With those recognized, the previous example becomes something like this:


The If statement has been recognized and substituted in the loop body.

 

Summary…

After this major step of control flow analysis is done, we end up with a much more semantically meaningful graph of the function. It consists of the following types of nodes:

  • raw code (sequences of instructions)
  • compound conditions
  • break/continue nodes
  • loops
  • switches/cases
  • ifs

During processing, each of these nodes has had some of its child nodes semantically tagged. For instance, compound conditions have one of their children marked as “first”, and the other as “second”. Ifs have one of their children tagged as “then”, and possibly another as “else” (to indicate which nodes begin the bodies of the then and else clauses), and another marked to indicate it is the condition node.

At this point we have a good idea what the final structure of the function source code will look like, but we’re still far from done.


The complete control flow graph for the makeARock method on the CliffRoom object in King’s Quest 6’s script 21.

 

The above graph ended up getting decompiled into the following source code:

(method (makeARock)
	(if (== rockCount maxRocks)
		(if (== gArrayScriptGetValue 300)
			(gKq6Messager say: 8 5 18 2 0 21)
		)
		((global2 script?) cue:)
	else
		(ShakeScreen 1 (Random 0 2))
		(global104 number: 300 setLoop: 1 play:)
		(if (> rockCount 0)
			([newRockStep (- rockCount 1)] stopUpd:)
		)
		((= [newRockStep rockCount] (RockStep new:))
			x:
				(cond 
					((== gArrayScriptGetValue 300) [local278 rockCount])
					(flipRocks (- 320 [local246 rockCount]))
					(else [local246 rockCount])
				)
			y:
				(if (== gArrayScriptGetValue 300)
					[local287 rockCount]
				else
					[local262 rockCount]
				)
			cel:
				(if
					(or
						(== global12 130)
						(== global12 340)
						(== global12 370)
					)
					2
				else
					0
				)
			rockPointer: rockCount
			corner:
				(cond 
					(
					(and (== rockCount 3) (== gArrayScriptGetValue 300)) 1)
					(
						(and
							(or (== rockCount 6) (== rockCount 11))
							(== gArrayScriptGetValue 320)
						)
						1
					)
					(else 0)
				)
			capStone:
				(if
					(or
						(and (== rockCount 14) (== gArrayScriptGetValue 320))
						(and (== rockCount 7) (== gArrayScriptGetValue 300))
					)
					1
				else
					0
				)
			init:
			setCycle: End RockStep
		)
		(self rockCount: (+ (self rockCount?) 1))
	)
)

The next step we’ll look at in the decompilation process is Instruction Consumption.


Decompiling SCI byte code – part 1

SCI is the Sierra Creative Interpreter – the scripting language and runtime on which Sierra’s adventure games were built between roughly 1988 and 1996.

As part of the SCI Companion project (an IDE for SCI) I worked on over the past year, I wrote a decompiler. In this series of posts I’ll go over the process of generating source code from compiled byte code, and the technical challenges and limitations of doing so.

A good reference for SCI byte code is this page in the ScummVM wiki.

This post is part 1 of the series. Here are links to the rest:

Decompiler

In contrast to a disassembler, a decompiler is quite a task to implement. A disassembler basically just produces assembly language with symbol lookups: a direct listing of the byte code, but with textual opcodes and with operand numbers replaced by variable names, literals or object names where possible.


The raw byte code for the Head::showSelf method in Space Quest 1 VGA

 

[Image: HeadShowSelf]

Disassembly from SV (Sierra Viewer) for the Head::showSelf method in Space Quest 1 VGA. With some patience it’s possible to understand what this method is doing, but it’s very slow-going.

 

A decompiler, on the other hand, attempts to recover the higher level code structure: if statements, loops, switch statements and so on. In addition, some work is done to generate meaningful variable names where possible.

 

[Image: SourceCode]

SCI Companion’s decompiled source code generated for the Head::showSelf method.

 

Compared to, say, a more generic x86 decompiler, decompiling SCI byte code is easier. SCI byte code and script resources actually retain a decent amount of “original information”. It is probably comparable to that of CIL (what C# and other .NET languages compile to), and certainly much more than compiled native x86 code, for instance.

We can retrieve the following information from the compiled script resources:

  • class and instance names
  • method names (both for calls and implementations)
  • property names

Some things whose original names are not retrievable are:

  • temporary variables, script and global variables, function parameters
  • procedures (i.e. functions that exist outside an object)

For many of these items we can make reasonable guesses though.

SCI byte code also has the advantage of being generated by a single compiler (at least for the original Sierra games – not for fan games). This means patterns of instructions representing a particular higher level construct will generally be similar. For instance, there is a very distinct pattern for switch statements.

Start to finish

The starting point is the SCI script resources. These contain code logic, class and instance definitions, strings, and script variables. Different versions of SCI use different formats for script resources, but my resource loader loads each into a single runtime format so it can be interacted with in a consistent way.

The ending point is an abstract syntax tree describing the entire script file. This is a tree of code constructs like if statements, loops, and function calls. From here, I can run the AST through a syntax formatter that outputs the source code in one of (currently) two source code languages (I was also working on a C++-style syntax for a while, but I abandoned that).

 

Determining code boundaries

The first step is to determine the boundaries of a function. SCI script resources can have public procedures and objects with methods. In each of these cases, an offset to the beginning of the method is provided. There is no end point or code size given, other than the total length of the entire script resource. We can try to determine end points by keeping track of where other chunks of data in the script resource begin, but that method isn’t always reliable.

Instead, to determine where a function ends, we can look for a RET opcode (return), which returns control to the calling function. However, RET can appear in the middle of a function too. So we actually need to look at all the branching instructions to keep track of the “furthest” point to which we have branched. If we encounter a RET instruction that is after any branch target, then we know we’ve reached the end of the function. Some corrupt scripts don’t satisfy this condition, and so this is a possible point of failure for decompilation.

At this point, it might be worth pointing out that SCI’s limited scope, and knowledge of the compiler originally used, help us out. For instance, a generic x86 decompiler would be a much more difficult task, since function code may not be laid out sequentially in memory. For SCI, however, we know that it is. We’ll never encounter a function that jumps to another arbitrary place in memory, so we can always assume that if there are no branch targets beyond a RET instruction we’ve encountered, it must be the end of the function.
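Here is roughly what that scan looks like in Python, assuming instructions have already been decoded into (address, opcode, operand) tuples and that branch_target resolves a branch’s relative operand to an absolute address (my own representation for illustration, not SCI Companion’s):

def find_function_end(instructions, start, branch_target):
    """Find the address of the ret that ends the function starting at 'start'."""
    furthest_target = start
    for address, opcode, operand in instructions:
        if address < start:
            continue
        if opcode in ("bnt", "bt", "jmp"):
            furthest_target = max(furthest_target, branch_target(address, operand))
        if opcode == "ret" and address >= furthest_target:
            return address          # no branch jumps past this ret: end of function
    return None                     # possibly a corrupt script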

 


Determining the boundary for the Collect::eachElementDo method. There’s a RET in the middle of the function, but we know there is a branch instruction that points after the RET – so the first RET doesn’t signal the end of the function.

 

One final challenge here is that SCI also supports local procedures in the script resources. For these (unlike public procedures and methods), we have no entry point listed anywhere in the script resource. Instead, we need to scan all bytecode for CALL opcodes, and put together a list of the call target addresses. These would be our entry points for local procedures.
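The scan itself is simple; a sketch, with the same assumed instruction tuples and a call_target helper that resolves a call’s relative operand to an absolute address:

def find_local_procedure_entries(instructions, call_target):
    """Collect candidate entry points for local procedures in a script."""
    return sorted({call_target(address, operand)
                   for address, opcode, operand in instructions
                   if opcode == "call"})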

After all this work is done, we have a set of start/end points for the code segments (functions) that we need to decompile.

Partitioning code into blocks

In order to perform control flow analysis (the most difficult step of decompilation), we need to distill the instructions down to some fundamental nodes that we can use in our analysis.

The interesting boundaries for control flow are the branch instructions (bnt, bt and jmp) within the code segments. Boundaries are also defined by the targets of branch instructions.
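In other words, a new node begins at every branch target and immediately after every branch instruction. A minimal sketch of computing those boundaries, under the same assumed instruction representation as above:

def find_block_boundaries(instructions, branch_target):
    """Compute the addresses at which new control flow nodes begin."""
    boundaries = set()
    for i, (address, opcode, operand) in enumerate(instructions):
        if opcode in ("bnt", "bt", "jmp"):
            boundaries.add(branch_target(address, operand))      # the target starts a node
            if i + 1 < len(instructions):
                boundaries.add(instructions[i + 1][0])           # so does the fall-through
    return sorted(boundaries)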

Let’s take a look at this piece of script:

;;; Sierra Script 1.0 - (do not remove this comment)
(script# 19)

(use Main)
(use Print)

(public
	SomeProc 0
)

(procedure (SomeProc &tmp someVar anotherVar)
	(= someVar 11)
	(= anotherVar 0)
	(if (== gRoomNumber 13)
		(Prints "Blah")
		(while (> someVar 0)
			(Printf "Count: %d" someVar)
			(if (== 0 (mod anotherVar 2))
				(++ anotherVar)
			)
			(-- someVar)
		)
	)
)

 

Here’s the assembly that would correspond to this script, annotated to indicate where the boundaries are between sequences of instructions that comprise “nodes” that are interesting for control flow.

// EXPORTED procedure #0 (SomeProc)
(procedure proc_000a
  000a:3f 02             link 2 // (var $2)
  000c:35 0b              ldi b 
  000e:a5 00              sat temp0 
  0010:35 00              ldi 0 
  0012:a5 01              sat temp1 
  0014:89 0b              lsg gRoomNumber 
  0016:35 0d              ldi d 
  0018:1a                 eq? 
  0019:31 2c              bnt code_0047 
-------------- BOUNDARY
  001b:78               push1 
  001c:74 0006          lofss $0006 // Blah
  001f:46 0399 0000 02  calle 399 procedure_0000 2 // Prints 
-------------- BOUNDARY
        code_0025
  0025:8d 00              lst temp0 
  0027:35 00              ldi 0 
  0029:1e                 gt? 
  002a:31 1b              bnt code_0047 
-------------- BOUNDARY
  002c:7a               push2 
  002d:74 000b          lofss $000b // Count: %d
  0030:8d 00              lst temp0 
  0032:46 0399 0001 04  calle 399 procedure_0001 4 // Printf 

  0038:76               push0 
  0039:8d 01              lst temp1 
  003b:35 02              ldi 2 
  003d:0a                 mod 
  003e:1a                 eq? 
  003f:31 02              bnt code_0043 
-------------- BOUNDARY
  0041:c5 01              +at temp1 
-------------- BOUNDARY
        code_0043
  0043:e5 00              -at temp0 
  0045:33 de              jmp code_0025 
-------------- BOUNDARY
        code_0047
  0047:48                 ret 
)

 

This is easier to visualize in graph form. The nodes in the graph are marked by their address in the script resource, followed by the list of assembly instructions that are part of that node:

 

[Image: RawControlFlowGraph]

 

 

You’ll note that we have created one extra node in this graph: the beginning of the set of instructions leading up to the first if statement has also been marked as a boundary. This is a tweak that makes it easier to distinguish if statements with compound conditions from nested if statements that have code between them that isn’t part of the 2nd if’s condition.

These graphs form the basis of the next step, control flow analysis.

 


Week Of Awesome – post mortem for “When It’s Dark”

Click here to play the game in the Unity web player

Sunday night we got the theme: “The toys are alive!”. My first thought was a game that involved toys that move only when no one is looking – a kind of creepy thing. I tend to lean heavily towards puzzle games, so I spent time figuring out what puzzles might arise from that. Finally, I have recently played a few indie games that use an isometric grid-based 3d world (Monument Valley and Back to Bed – not a big fan of Back to Bed so far though), so that was going to be my inspiration for the visuals.


Working with light probes in Unity

I had spent the previous 3 days or so playing around with Unity, so I felt ready to tackle my first Unity project. Some of my goals during the competition were:

  • Use this as an opportunity to become really familiar with Unity.
  • Make a complete, if short, game. I wanted a complete end-to-end thought-out experience. So I was careful to scope the size of the game so that I could complete it in time.
  • I wanted to do something with visual impact. It’s a challenge to do this with my limited artistic abilities.

Timeline

First, I’ll give an overview of what I spent each of the 7 days on. I feel like I had a pretty successful game jam, so maybe there is some interesting info here?

  1. Basic environment setup. I made a small test level mesh in Blender, got it into Unity. I found a suitable animated player model and implemented player movement. Just the basics. I also thought about the gameplay mechanics I wanted.
  2. Basic game mechanics. I got most of the gameplay mechanics working, so I basically had a prototype of what the game would feel like. This assured me, by the end of day 2, that everything I wanted to do was feasible.
  3. Demo -> Game. I worked on overall stuff that made it a game and not just a demo: main menu, sequence from one level to another, etc..
  4. Level design. I started level design – 3 basic “tutorialish” levels – and came up with a workflow to take me from design on paper to a functioning level in Unity.
  5. Level design. My goal for day 5 was to have an essentially complete game that could be released as is. This day was all about level design. I designed 4 more levels, and cut myself off at this point. There were some new mechanics I really wanted to try out, but I forced myself to say “this is complete”. I also came up with an ending to the game, and implemented that as a final “cutscene” level.
  6. Lighting. I worked on visuals. Essentially I wanted to bake lighting into the game so I could have global illumination with lots of lights, instead of just the harshness of a directional light or two. The visual aspect of the game took a great leap.
  7. Polish. I worked on minor polish for about 9 hours, after making sure I had functional “backup” builds working.

One notable thing is that there was no “audio” day or “graphics” day or anything (except for the light mapping on day 6, but that was a new thing for me, so I had to spend a lot of time on it). I basically worked on the audio and visual art throughout the 7 days. I like to intersperse this work throughout – switching tasks periodically helps me with motivation. I might be getting frustrated by something, but then I put in some new sounds, or a new model. Suddenly the game looks fresh and better and I get more excited about it!


Gameplay brainstorming from early on

What went well

Unity

The main thing that went well was using the Unity game engine. After spending so long with XNA and MonoGame, it was a pleasure to get things up and running so quickly.

  • Models generally imported with no problems (unless the models themselves were corrupted, more on that later), and animations were easy to set up
  • Very simple to get robust physics set up
  • No audio headaches like I’ve had with XNA and MonoGame
  • Everything worked on every platform I tried with pretty much no changes (PC, and the Unity web player on both PC and Mac OS). In contrast, with MonoGame there was a significant amount of fiddling with things like texture formats in order to get stuff working on different platforms. I kept being surprised by how everything “just worked”.
  • Unity has some great tutorials available for it, and there is so much support and information out there.
  • Player state (which in my game just consists of what level has been completed) is trivial to set up. Like about a minute of work for what would be maybe half an hour in XNA. Little things like this are nice – I noticed how many other game jam entries don’t save player state.

Scheduling

I feel I scheduled my time pretty well. I was careful to not work too hard during the first four or five days to keep my energy up. I got “enough” sleep, but not as much as I wanted. The only time I really questioned whether I would get something done was with the lightmapping on day 6. But it was not essential to the finished product.

Audio

I was able to find all the audio clips I needed (and music) from soundsnap.com, and in some cases from stuff I had recorded. For instance, the footstep sounds are part of my personal audio recordings collection. I cleaned up the sounds in Sony Vegas, and then plopped them into the game.


Preparing footstep sounds from a raw recording in Sony Vegas

One small flaw I see in Unity is the lack of sound design support. I know people often hate on XACT (which is one of the audio solutions for XNA), but it really is nice to be able to assemble sounds in a sound design app and then expose a series of knobs for the programmer to tweak. In Unity everything has to be done in code. One example is playing a random different footstep sound each time. There’s no reason this should need to be done in code – it should be part of the sound definition itself.

Visual design

Even though I’m no graphic designer or artist, I am pleased with the way the game turned out visually. The main parts of the level are just cubes/squares/whatever. I wrote code to give them the proper UV channels (I seem to be forever clueless about UV unwrapping in Blender) and applied a very “spartan” texture in Unity. This, combined with lightmapping support from Unity, gives me a look I’m fairly pleased with. It can be hard for someone like me to come up with a consistent art style, but keeping things simple helps. Given that most of my world was done “programmatically”, I needed few models. I only purchased/acquired 5 of them I think.

Reflectors used for the sconces.

My game levels don’t have a ceiling, but I wanted to simulate light from the sconces bouncing off the ceiling onto the walls. I placed small quads above the sconces facing downward (invisible from above). Seemed to do the job.

Atmosphere

I’m pleased with the alternate relaxing/horror atmosphere I gave the game. Sometimes something terrible has apparently happened, but everything seems just hunky dory.

Game ending

I didn’t think of the game ending until day 5, so I suppose this was kind of fortuitous. But I think it ties it all together nicely. It was kind of random luck that I thought of the “twist”, I suppose.

Minimal UI

I’m a fan of making games with minimal UI, and I think the time it saves in tight deadlines like this is definitely an asset. The goal is that players can just figure out the game by playing it, without needing any tutorials.

What went wrong

Mechanics

In terms of game mechanics, I’m not 100% satisfied with what I came up with. I still think there could be some more fun mechanics in there. I feel I may have locked myself into certain gameplay elements too early. They say come up with 5 designs and then throw them all out and come up with another. I didn’t do that – I basically went with the first idea I got. In the end, I feel my game has more of a visual/atmosphere impact than a gameplay impact. Maybe that’s fine, but I still think I could have done better. I mean, turning off the lights and having the toys only move when it’s dark – it doesn’t really add anything to the gameplay. It just makes it a bit annoying (but it does add to the creepy atmosphere of the game). The other thing I’m not satisfied with is that while solving the puzzles you can get into an impossible-to-win situation, so you need to restart the level. It’s not a huge deal since the levels are small, but something like a rewind functionality could help this (not trivial to implement). When I saw this happened frequently enough, I tried to write logic to detect when things have “gone bad” and automatically pop up a “reload level” button. Unfortunately I can’t detect all cases.


Oops, you needed that bear!

Play-testing

I was able to get very little feedback. Near the end when I was closer to shipping, I did get some friends to try it out. At least one of them – once they caught on and got to the later levels – seemed to really enjoy it.

Corrupted models

Nearly every model I downloaded from the web (even those I paid for) had problems. Often the transforms for the individual pieces would be wrong (IMG soldier), or half the normals would be inside out. I had to fix up most of the models in Blender. Speaking of which,

Blender

I know it’s free (so I shouldn’t complain), but I really do have a love/hate relationship with it. The UI just seems to fight against what I want to do. I’m a big fan of standard user interfaces, and Blender just seems so counter-intuitive no matter how familiar I get with it. It’s also remarkably hard to find documentation, despite how widely used it is.

Code decay

Near the end I started slapping together scripts with more abandon. I suppose this isn’t too big a deal near the end of a gamejam project, but it did result in some bugs, and some things that ended up being slightly more difficult to implement near the end. For example, I think I had 3 separate scripts trying to control the camera, and stomping on each other. Part of this just comes from my inexperience with Unity and when to use which way (traverse hierarchy by name, search by tag, provide a gameobject instance to the script in the editor etc…) to access game objects. In the end this wasn’t a huge deal though.

Challenges

Here I’ll talk about some of the design challenges I faced.

Level geometry

I needed platforms to walk on, walls, and collision boundaries for the level. Fundamentally, everything could be inferred from a simple tile map that contained floors and walls. I wasn’t sure at first how to do that in Unity. I found the documentation for the class that lets you build meshes at runtime. But I wanted my level to be visible in the editor too. There’s probably a way to do this, but I ended up going with a solution where I modeled the level in Blender (I at least know how to extrude cubes, split faces from objects and such).


Basic geometry in Blender, and what it looks like imported into Unity with other game elements.

I only needed very simple texturing on the level geometry – basically, UVs projected along the x, y and z axes. I wasted a bit of time trying to do UV-unwrapping in Blender, but this just seemed over-complicated for what I wanted to do. I contemplated writing a shader in Unity that would calculate UVs based on world position. But I didn’t feel like delving into custom Unity shaders in the middle of a gamejam. In the end I wrote a Blender script that projects UV coordinates onto objects based on the face normals.


Tiled floor pattern I made using a desaturated texture generated with MapZone, and processed with Photoshop

Script I wrote in Blender to calculate projected UV coordinates for the x, y and z axes:

import bpy

me = bpy.context.object.data
uv_layer = me.uv_layers.active.data
vertices = me.vertices

textureScale = 0.5
textureOffset = 0.5

for poly in me.polygons:
    print("Polygon index: %d, length: %d" % (poly.index, poly.loop_total))

    # range is used here to show how the polygons reference loops,
    # for convenience 'poly.loop_indices' can be used instead.
    for loop_index in range(poly.loop_start, poly.loop_start + poly.loop_total):
        print(" Vertex: %d" % me.loops[loop_index].vertex_index)
        vIndex = me.loops[loop_index].vertex_index
        theNormal = vertices[vIndex].normal
        pos = vertices[vIndex].co
        print(" Pos: %r" % pos)
        print(" norm: %r" % theNormal)

        uvProj = (0,0)

        # figure out which direction the normal faces.
        if theNormal.x > theNormal.y:
            if theNormal.z > theNormal.x:
                # faces z direction
                uvProj = (pos.x * textureScale + textureOffset, pos.y * textureScale + textureOffset)
            else:
                # faces x direction
                uvProj = (pos.y * textureScale + textureOffset, pos.z * textureScale + textureOffset)
        else:
            if theNormal.z > theNormal.y:
                # faces z direction
                uvProj = (pos.x * textureScale + textureOffset, pos.y * textureScale + textureOffset)
            else:
                # faces y direction
                uvProj = (pos.x * textureScale + textureOffset, pos.z * textureScale + textureOffset)

        print("Old UV: %r" % uv_layer[loop_index].uv)
        uv_layer[loop_index].uv = uvProj
        print("New UV: %r" % uv_layer[loop_index].uv)

Movement and physics

My game design presented a challenge because I wanted a fully 3d physics-based world, but also discrete tile-based movement for the toys in the game. I debated whether or not to have some grid-based store of the current game state (enforcing one game entity at each square, for instance). I’m not a huge fan of duplicated state though, so I decided to just leverage the physics system. So when doing my discrete movement I decided to just perform raycasts one world unit in the appropriate direction and see if anything was hit. This does have the advantage that things like opened/closed doors blocking things “just work”. However, I did encounter issues with ray casts missing things, and casting from the wrong position (for some models, their transform position lay under the platform).


Movement is continuous for the player (on the right), but discrete for the other game objects.

I ended up having to use sphere casts instead of raycasts, and being very careful to adjust the start point of the cast to an appropriate level above the ground. Overall it’s not a very robust system, so I would probably change it if I continue development on this game.

Visual indicators

When the lights are off, you can hear the bear moving. Some people don’t play with audio, or can’t hear it. In that case I wanted some visual indicator that corresponded to the bears’ movement, so that the player doesn’t need to guess how far they’ve moved. I had a lot of trouble coming up with a good design for this. At first I had dots across the bottom of the screen, but I thought that ruined the immersive experience. In the end I have a subtle spotlight on the player that rotates. One rotation is one bear step. It’s not very obvious though. In the updated version the spotlight pulses in addition to rotating. That’s a bit more obvious but it’s still not ideal.

Things I wanted to get done and bugs

Additional mechanics

Some additional mechanics I wanted to try were:

  • The player can point a light beam or move some glowing object around. It would stay lit when the lights go off, and would prevent bears (or whatever) from moving across its path. In the end, since I already had something that prevented bears from moving across an area (the soldiers), I decided the extra implementation time wasn’t worth the possible extra puzzles.
  • A bear or toy that moves towards the player when the lights are off, and kills him if it reaches him.

I would also like to perhaps have something else happening while the lights are off (i.e. instead of the player just waiting).

Bugs

The final submission for the competition had a few bugs. One made it possible to get stuck, unable to turn the light switch on again (the player movement is frozen when the lights are off). I couldn’t reproduce it, but it happened to a friend a few times. The light switch trigger depended on the position and direction of the player, and even though I stopped the player in his tracks when the light switch went off, I think due to update order there is a possibility that the player moves out of position the next frame. This wasn’t a robust way to do things, and I’ve fixed it in the updated version. Another bug was the lighting on the bear. I noticed in certain areas it was just not lit correctly. Have a look:

[Image: PM1]

The lighting just seems flat and out of whack with the surroundings. The static objects are lightmapped, which means I’ve baked the global illumination into lightmap textures for those objects. As a result, no dynamic lighting is applied to them. This means dynamic objects, such as the player or bears, need to use light probes. I knew something was up, but I didn’t realize I could look at the light probe cells used by an object until after the compo was over. When I did, for a bear it looked like this:


The bear is using light probes scattered across the level.

The light probes used were scattered across the level, when they should have been the ones closest to the bear. It took me a while to figure out why. Only the center of the object’s bounding volume is used to determine which light probes are used, not its size. In this case, that center turned out to be well below the bear, outside of the level volume covered by light probes. I’m not sure exactly why Unity chose those probes, but adjusting the Light Probe Anchor gave me this instead:

[Image: PM4]

And now the bears look much better:

[Image: PM10]

Another thing I noticed is that the lighting on my doors looked bad. There isn’t really a way around this with Unity’s current system though. Only a single interpolated light probe value is chosen for the entire mesh, so a flat mesh (like the door) will have the same lighting throughout. I could have baked the door into the lightmaps, but then it would have looked incorrect when the door was opened. The color of the door is an important gameplay element, so I suppose it’s OK that the lighting isn’t too interesting on it.

Summary

I’m not sure if I’ll continue work on this game – I suppose it might depend on how well it does in the competition. Are the mechanics strong enough to make an interesting game? What other things can I do to make the game interesting? A deeper story? More of an exploratory feel to the world? I know if I continued I’d want to streamline the level generation process (basically have more geometry generated automatically from a simple text file or something). Getting familiar with Unity was definitely a good move career-wise. From a programming perspective though, things are not very challenging. Gameplay programming barely feels like programming. I may look into shader programming in Unity next, because that interests me.


Week of Awesome II – day 6 (belated)

It’s actually the beginning of day 7 (12 hours left in the competition), but I never got around to this last night.

Stopped in my tracks

Yesterday morning didn’t go so well. I wanted to start work on baking lightmaps in Unity to get some kind of global illumination. The first thing I encountered, however, was that Unity would freeze when I started certain levels from the editor. Work was brought to an immediate halt as I tried in vain to debug this. Was my scene file corrupted somehow? I had recently (the night before) changed a bunch of model import settings needed to generate lightmaps. I started panicking a bit.

I caused some bugs while building this level last night.

I caused some bugs while building this level last night.

Some googling suggested this might just be a script that was caught in an infinite loop. It was hard for me to believe that this would cause the editor to permanently freeze. Surely there would be some facility to detect this in Unity? It wouldn’t be hard – just have another thread running that monitors whether or not the main game thread has spent too long in a script’s Update. But that’s exactly what it was. While building my final cutscene the night before I had introduced an infinite loop in one of my scripts. I tracked it down by removing objects from my scene until the problem no longer repro’d. 1.5 hours wasted (though after finding out this could be due to a hanging script, I was able to find the cause fairly quickly). Two helpful reminders here:

  • I was making pretty substantial changes to core scripts – I should have tried out a few other levels periodically while doing this, as that would have narrowed the scope of the detective work I would have had to do
  • I had switched a bit into “hack stuff together” mode, and this bug was basically introduced because of that

Light maps

After getting basic lightmaps baked and working, I set about figuring out whether my additional constraints made this feasible. In particular, I need to be able to switch to a “dark” lightmap when the lights go off in my game. After some poking around, I found a way to do this by setting null lightmaps from script. One problem solved.

Light probes in Unity

Light probes in Unity

I noticed that shadows from my one dynamic light had vanished. Or rather that they only appeared when my character walked under bright static lights (which makes no sense at all). This was the beginning of a 6 hour process of trying to get shadows to work properly. I ended up not being able to. I obviously have a flaw in my understanding of how lightmaps work in Unity. Supposedly one directional dynamic light is supported with lightmaps – and it is, sort of. Unfortunately it looks like the final lighting term is (dynamic_directional_light + light_map * shadow_term) instead of (dynamic_directional_light * shadow_term + light_map). The only way I could get my character’s shadow to show up in slightly darker regions of the level was to pump up the shadow strength to full, which made the shadow edges look horrible and caused shadows to appear behind objects.

At any rate, I was about to throw in the towel when I decided to try simple projected “fake shadows”. They were quick to set up and looked decent enough (enough to “ground” the character). So I decided to go with it.

As it was now 6pm, I really doubted that I would have enough time to set up the lighting rigs in all 8 levels. This includes placing lights around the scene, and putting in light probes everywhere to light my dynamic objects. It turned out this process went much more quickly than I had anticipated though. It ended up taking only about 20 minutes per level.


GI with many lights using light maps on the left.

Overall, this is the first really big struggle I’ve had with Unity (getting nice shadows to play well with lightmaps). I also find the light probe placement UI kind of finicky. Other than this, Unity has been an absolute joy to use. Most everything works robustly and quickly.

I finished in good time and got to work on other polish.

Today

Today I have only minor fixes I need to make to get the final submission ready. So hopefully that will give me time to add some more polish and flourish. We’ll see how far I get! Today shouldn’t be nearly as stressful as yesterday though.

There are some noticeable artifacts in my light mapping, but I’m not going to bother fixing them. I tried various light map settings but they still appear. So maybe it’s a problem with the way my models are constructed?

[Image: Artifact]


Week of Awesome – day 5 – When It’s Dark

Today was all about level design. After 3 simple “introductory” levels yesterday, I came up with 4 new levels today. Unfortunately several of them exposed bugs in my mechanics which I had to address … so it’s been a good 10 hours of work today.

[Image: Day5-1]

There was a new mechanic that I was really excited about adding, but in the end I decided to cut it. It’s just not well fleshed-out enough (both in gameplay and in the visuals implementation) to make it in for this game jam.

Overall, this is what I accomplished:

  • I now track which levels are completed (for unlocking purposes on the main menu)
  • In making level 4, I found a bug in the shooting script and had to fix that
  • I improved the particle effect for the gun smoke
  • Level 5 worked out just fine, done in under one hour
  • Level 6 required some gameplay mechanic fixes. Unfortunately I’ve just discovered a flaw in level 6 that lets it be completed too easily. I can’t think of a quick fix right now, so I’ll have to spend time on this tomorrow.
  • I made the camera track player movement smoothly
  • I finished a level 7, but it doesn’t demonstrate anything terribly new. Again, I don’t have time for another mechanic I wanted to add.
  • I changed the coloration of some of the levels, for more variety. I’ll work harder on this when I try lightmapping this weekend.
  • I added logic to detect some cases where it becomes impossible to complete the level, and I pop up a “reload” button. Yes, there is some “figure out by dying” gameplay here.

[Image: Day5-2]

Oh! and I now have a title: “When It’s Dark”. Not terribly original, but I needed something.

I’ve published a Unity web player version of the game here:

http://icefallgames.com/Games/WhenItsDark


Week of Awesome – day 4

Forcing myself to end this day a little early (it’s only 7:30pm!) so I can get some R&R and not get too burned out.

[Image: day4-1]

Today was reasonably productive.

  • I tried to get svn working with Unity, but in the end I decided it wasn’t worth the time at this point
  • I changed my lighting setup so it’s not such harsh directional lighting. Of course now I’m using 3 directional lights, and running into problems when Unity switches my spotlight to per-vertex instead of per-pixel. I’m hoping to be able to bake my lighting eventually, but we’ll see.
  • Saved some space by shrinking textures. Changed the material color for the main character
  • Changed some gameplay dynamics
  • Put in “level title” functionality
  • Wrote down very detailed steps on what meshes are required and what I need to do to them in Blender, so that they work nicely with my Unity game objects.
  • Made a simple GUI for when you’re in the level and need to return to the main menu, or reset the level.

So now I basically plan out the level on paper, and then start building it in Blender using my checklist. I’ve built the first 3 levels today, and it’s taking about an hour to create each (plus the time to fix bugs I encountered).


Blender workflow

I’ve got some decent ideas for the next few levels. Hopefully in the end I’ll have around 8. Will I be able to get them all done tomorrow and then just work on polish? (And perhaps release a build).


Unity workflow


Week of Awesome – day 3

So day 3 has come to an end, and still I haven’t begun level design in earnest. Oh well!

[Image: Day3]

I still have some new game mechanics to flesh out, but I’m worried the levels won’t be interesting enough. At any rate, here’s what I accomplished today:

  • I found a GUI skin and made a main menu that can load an arbitrary number of levels (just one so far)
  • I implemented the soldier toy logic, which involves shooting. I found a particle effect in the Unity asset store that works for me
  • I implemented a door opening thing based on triggers/levers found in the level
  • I improved the walking animation and footstep sounds of the main character
  • I implemented the level exit functionality
  • I crystallized how I plan to implement my level files (walls, floors and collision layers being separate objects)
  • I spent too much time in Blender trying to make things
  • I plopped in a background image instead of just having black (not sure I’ll keep this)

I’m still not revealing too much about the gameplay yet. Perhaps by Friday I’ll have a test build I can send out to people to try out (hopefully running in the Unity Web Player).

As for big issues I’m seeing (other than the level design), I’m worried about my lighting. I really want the game to look nice, and so I plan to use light mapping. I haven’t tested this out yet though to see how much work it will be. Perhaps I’ll try that tomorrow. Currently I’m just using a single directional light (with shadows), and an ambient term. Things don’t look that good.
