
Managing game object hierarchy in an entity component system

I thought I’d do a post on this, since there is little concrete information on the web about how to implement parent-child relationships in an ECS. I’ve been struggling with this for a while, and have finally developed something that appears reasonable (at least for the scenarios I think I need to accomplish).

This will be a boring post with no pictures.

Ideally you want to be able to express several kinds of hierarchies (visual, inventory, teams, etc…). I’m mainly going to focus on visual hierarchy for this article, however. It’s the most complex and prevalent.

Some notes about transform hierarchy:

  • When I change the values in the parent’s transform, those need to result in the child’s final transform getting updated
  • We don’t ever want to get out-of-sync (e.g. draw a child in the wrong position)
  • Items at the root don’t need a local transform
  • I’d like to minimize memory usage (i.e. not store transforms if I don’t have to)

The first thing that is clear is that we have a distinction between local transform and what I’ll call final transform (the one we use for rendering). The value for final transform is equal to (local transform) * (parent’s final transform).
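In XNA’s row-vector convention that’s just a matrix multiply – a one-line sketch:

    // Final transform = local transform composed with the parent's final transform.
    // (Row vectors: the local transform is applied first, then the parent's.)
    Matrix finalTransform = localTransform * parentFinalTransform;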

Do local and final transform belong together on a single component? That would make sense if all visual entities require both kinds of transforms. But entities at the root (which is most entities) don’t need a local transform – what would it be local to? I suppose we could have an identity matrix as the parent transform for entities without visual parents. This was an appealing idea at first because it implied a level of consistency between all entities. The obvious downside is that we are now storing a bunch of unneeded extra information for top level entities (which, again, is probably most entities).

Having top level entities without a local transform felt a little “dirty” to me, like a kind of code smell. I think this was because I was thinking that outside code then needs to be smart about whether it pushes values into local or final transforms. Any changes made to final transform will just be overwritten when the final transform is calculated from the local transform. So it would need to manipulate local transform instead – unless it was a top level entity without a local transform.

But more thinking and reading led me to believe that this isn’t a problem. “Outside code” is kind of a generic catch-all term. But what code is it that sets position, orientation, scale, etc.? There’s the physics code – but as far as I can tell, in my case it should always be operating on top level entities (and the “final transform”). Same for code that might be responsible for spawning items in the world. If my game allows equipping weapons or items (and needs to visually attach them to a player model), then that code would automatically be aware that it is attaching stuff to an existing entity. Thus it would know to use the local transform. In short, I’m hard-pressed to come up with a scenario where code would have to make a choice between local and final transform.

The one exception is the editor – I want to be able to manipulate top level and child entities in the same manner. In this case, I manipulate the final transform, then look for a local transform and use the inverse of the parent’s final transform to update it.
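In code, that editor path is something like this (a sketch; it assumes the parent’s final transform is up-to-date and invertible):

    // The editor wrote a new final transform; re-derive the local transform
    // from final = local * parentFinal  =>  local = final * inverse(parentFinal).
    Matrix parentInverse = Matrix.Invert(parentFinalTransform);
    Matrix newLocalTransform = newFinalTransform * parentInverse;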

In any case, it now seemed fine to separate transform into Transform and LocalTransform.
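For concreteness, here’s roughly what the split looks like (a sketch – the exact fields, and the dirty flags discussed in the next section, are my choices):

    // On every visual entity: the final, world-space transform used for rendering.
    class Transform
    {
        public Matrix Value;
        public bool Dirty;    // needs recomputing from LocalTransform?
    }

    // Only on child entities.
    class LocalTransform
    {
        public Matrix Value;  // relative to the parent's final transform
        public int Parent;    // entity id of the visual parent
        public bool Dirty;    // has the local transform been modified?
    }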

Dirty state and updates

Ideally, I’d like not to have to update the final transforms every frame. Updating the transforms involves following the parent hierarchy up, which does not access memory in a cache-friendly manner. This means I need to implement some kind of dirty state.

  • if the LocalTransform is dirty, the Transform also needs to be dirtied
  • if the Transform is dirtied, all child Transforms are also dirty

So we basically have a two-stage “dirty bit propagation” requirement. Breaking it down like this makes it fairly straightforward to implement with a pair of systems. First, a system iterates through all LocalTransforms. If it finds one that is dirty, it looks for a Transform component on the same entity as the LocalTransform, and marks that Transform as dirty.
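A sketch of that first system (the world/component API here is invented for illustration):

    // System 1: a dirty LocalTransform dirties the Transform on the same entity.
    foreach (int entity in world.EntitiesWith<LocalTransform>())
    {
        if (world.Get<LocalTransform>(entity).Dirty)
        {
            world.Get<Transform>(entity).Dirty = true;
        }
    }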

Next, a second system iterates through all Transforms. If it finds a dirty one, it looks for a TransformChildren component on that same entity. TransformChildren contains a list of entity ids for entities whose LocalTransforms specify the current entity as the parent. It looks up the Transform components on those entities and sets their dirty bits. It then needs to continue this recursively down the visual hierarchy. If my Transforms were sorted in hierarchy order (parents always before their children), I could avoid the recursive propagation.
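The recursive step, sketched with the same invented API (the outer loop calls PropagateDirty for each dirty Transform it finds):

    // System 2: push the dirty bit down through the visual hierarchy.
    void PropagateDirty(int entity)
    {
        TransformChildren children;
        if (!world.TryGet<TransformChildren>(entity, out children))
            return;
        foreach (int child in children.Entities)
        {
            Transform childTransform = world.Get<Transform>(child);
            if (!childTransform.Dirty)   // an already-dirty child will be visited by the outer loop
            {
                childTransform.Dirty = true;
                PropagateDirty(child);
            }
        }
    }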

Finally, there is a system that resolves dirty transforms. This is where the logic lies that computes the values in Transform from the values in LocalTransform and the parent’s Transform.
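A sketch of the resolve system (parents are resolved before children by recursing upward first):

    // System 3: recompute final transforms from local transforms and parents.
    void ResolveDirtyTransforms()
    {
        foreach (int entity in world.EntitiesWith<Transform>())
            Resolve(entity);
    }

    void Resolve(int entity)
    {
        Transform transform = world.Get<Transform>(entity);
        if (!transform.Dirty)
            return;
        LocalTransform local;
        if (world.TryGet<LocalTransform>(entity, out local))
        {
            Resolve(local.Parent);   // make sure the parent's final transform is fresh
            transform.Value = local.Value * world.Get<Transform>(local.Parent).Value;
            local.Dirty = false;
        }
        // Top level entities keep whatever was pushed directly into Transform.
        transform.Dirty = false;
    }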

All of these systems are fairly straightforward and not much code.

Update order

The systems described above run in the order they are described. Of course, it is possible that some system that runs shortly after will then modify a Transform somewhere that causes other child transforms not to be up-to-date. Subsequent systems may then consume these incorrect Transforms. This could also happen with a system that runs before. I don’t think that’s really avoidable though, unless I always keep all transforms up-to-date immediately (which would be a performance problem and would cause code dependencies that I’d rather not have).

TransformChildren

I kind of hastily introduced the TransformChildren component above. For every LocalTransform on an entity, the parent entity needs a TransformChildren component that has the child entity in its child list. Any changes to one need to be reflected in the other, so you need to be careful about who modifies these components, and ensure both are updated in some atomic fashion. Similarly, for inventory, I have Inventory (on the parent) and InventoryParent (on the child – this specifies the parent whose inventory it belongs to).
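One way to keep the two sides in sync is to route every parenting change through a single helper (a sketch, again with an invented component API):

    // All attachment goes through here, so LocalTransform.Parent and
    // TransformChildren.Entities can never disagree.
    void AttachChild(int parent, int child, Matrix local)
    {
        world.Add(child, new LocalTransform { Value = local, Parent = parent, Dirty = true });

        TransformChildren children;
        if (!world.TryGet<TransformChildren>(parent, out children))
        {
            children = new TransformChildren();
            world.Add(parent, children);
        }
        children.Entities.Add(child);
    }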

Ideally I wouldn’t even need TransformChildren (or Inventory). This is essentially duplicated state. The needed information is contained wholly in the child entities’ components. But getting at it when needed is not convenient. In fact, the only place TransformChildren is really needed is in the system that propagates dirty transforms. If I didn’t need this dirty state propagation, I could probably do without TransformChildren. So that’s something I might look into.

This is all still fairly theoretical at the moment, I need more game scenarios implemented to see how things fall out.


Ambient occlusion for dynamic objects

In this previous post, I described how I implemented a primitive form of ambient occlusion for static objects. The gist of it is that I perform an offline rendering step that creates a texture describing the occlusion at each point in the world, at various heights above the terrain.

The thing lacking here is occlusion for moving objects such as the player or other NPCs. Here’s an example under cloudy skies (no directional light):

 

The character and the ground are muddled together and it looks like the man is floating. Contrast that to the static rock in the background, which “casts” an occlusion area around it and appears more “grounded”.

 

My offline-generated static occlusion map is (currently) a large 1024 x 1024 texture. To accomplish occlusion for dynamic objects, I need to add their occlusion effect into that texture each frame.

What I actually do is create a much smaller occlusion map (at runtime) that just covers the area seen by the camera. Each frame, we first copy the relevant portion of the static occlusion map into this much smaller runtime occlusion map. Then, we blend in the occlusion effect of dynamic objects. Then we use this texture in our ambient lighting term, instead of the original large static occlusion map. There is obviously an extra performance cost, but it is negligible.
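In XNA terms, the per-frame composition is just two passes into a small render target. A sketch – the helper names (visibleRegion, WorldToOcclusionMap, worldToMapScale, discTexture, discCenter) are mine, and Occluder is the component introduced just below:

    // Rebuild the runtime occlusion map for this frame.
    device.SetRenderTarget(runtimeOcclusionMap);

    // 1. Copy in the camera-visible region of the big static map.
    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);
    spriteBatch.Draw(staticOcclusionMap, runtimeOcclusionMap.Bounds, visibleRegion, Color.White);
    spriteBatch.End();

    // 2. Blend a disc over it for each dynamic occluder (culling omitted).
    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
    foreach (int entity in world.EntitiesWith<Occluder>())
    {
        Occluder occluder = world.Get<Occluder>(entity);
        Vector2 mapPosition = WorldToOcclusionMap(world.Get<Transform>(entity).Value.Translation);
        spriteBatch.Draw(discTexture, mapPosition, null,
                         new Color(0f, 0f, 0f, occluder.Strength), 0f, discCenter,
                         occluder.Size * worldToMapScale, SpriteEffects.None, 0f);
    }
    spriteBatch.End();
    device.SetRenderTarget(null);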

The effect is subtle (as it should be), but now our character appears much more “grounded”:

 

[Image: character with ambient occlusion]

 

Currently the dynamic occlusion is just a circular disc with a size and strength, but I could change it to other shapes if I needed. I made an Occluder component which can be added to any entity to cause it to “generate occlusion”.
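Sketched as a component (field names are mine):

    // Add to any entity that should darken the ground around it.
    class Occluder
    {
        public float Size;        // disc radius, in world units
        public float Strength;    // 0 = no darkening, 1 = fully occluded
    }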

When I first implemented this, I found that the resolution of my occlusion map was not sufficient for moving objects (the underlying occlusion was jittery as the object moved). Here’s the grid resolution:

 

[Image: occlusion map grid resolution]

 

So I just doubled the resolution of the runtime occlusion map (but not the original static map). Now the dynamic occlusion discs are drawn at twice the resolution, which is enough to look good.

It also had the nice effect of smoothing out the static part of the occlusion map (since it undergoes some filtering during the upscaling operation):

 

The coarse resolution of the static occlusion map has been smoothed somewhat in the bottom image.

 

 


Decals (deferred rendering)

I knew I needed something to spruce up my monotonous terrain, so I recently implemented a decal system. This lets me paste moss, leaves, footprints, bullet holes and anything else “flat” anywhere on my terrain (or any object in the world).

 

Some decals on a cliff and brick wall (admittedly pretty poor examples right now).

 

A deferred rendering engine offers an opportunity for easy-to-implement decals. Instead of trying to generate decal geometry that matches the scene geometry (a very complicated proposition), we can just use the information in our G-buffer to project the decal textures onto the scene.

Some literature:

 

The basic premise is to render a rectangular box enclosing the area where you want the decal projected. This geometry must be rendered after the regular G-buffer pass, since it requires the depth buffer (and possibly the normal buffer). It could be done during the lighting pass, but then you’ll need to apply lighting to each decal.

Instead, my decals are rendered to the albedo part of the G-buffer, but using the data from the depth buffer. I also support normal-mapped decals, which means I might also render to the normal buffer. This has some implications which I’ll go into later on.

 

Decal projected onto a cliff, showing the box geometry we use to render it.

 

Note that you aren’t necessarily limited to rendering a rectangular box – it could be any geometry, as long as it covers the area you need and doesn’t cover too much (which would result in the decal being projected over too large an area).

 

The decal box was too big, and ended up projecting the texture onto the white cylinder too.

 

 

A rectangular box also provides a simple way to determine the texture coordinates for your decal sample. Given your depth buffer sample, you calculate the pixel’s world position. That position is then transformed back into the rectangular box’s object space using the inverse World matrix. From there we can just use the position’s xy coordinates to determine the decal texture coordinate.

 

	float4 objectPosition = mul(worldPositionFromDepthBuffer, InverseWorld);
	// objectPosition gives us a position inside (or not) of our 1x1x1 cube centered at (0, 0, 0).
	//  Reject anything outside.
	clip(0.5 - abs(objectPosition.xyz));
	// Add 0.5 to get our texture coordinates.
	float2 decalTexCoord = objectPosition.xy + 0.5;
	float4 color = tex2D(DecalTextureSampler, decalTexCoord);

 

 

Some leaf decals projected onto the ground.

 

Normal-mapped decals

Often you’ll want your decals to include a normal map. Otherwise they’ll “inherit” the normal information that is in your G-buffer.

 

The leaves on the right don’t have a normal map associated with them, so they get the normal information from the underlying grass.

 

If you’re using a normal map, you obviously need a normal (not to mention a binormal and tangent). But you can’t sample from the normal buffer if you’re also writing to it (yes, you could make a copy of it – at some cost). The information in the depth buffer can give you the normal, however. Not only that, but there is a way to construct a tangent basis from this information too. I won’t go into the details here (maybe I’ll save that for a future post), but the link describes in great detail how to do this and provides a shader snippet.

One thing to note is that if you extract normals from the depth buffer, you’ll be getting “hard-edged” normals from the scene geometry. That’s not always desirable. As I mentioned before, you can use the normal buffer (if you make a copy of it) – although that has its own problems, since those are the “normal-mapped normals”, not the base smooth surface normals (which are ideally what you want, since they’re what was used to generate the original image).

As an additional alternative, I allow using a normal based on the actual decal box. This means the decal has a single constant normal all the way across. That’s fine when the surface being projected onto is flat and the decal box is well-aligned with it. Here’s an image that shows the differences.

 

The depth buffer-based normal shows the hard angles of the underlying geometry. The decal-based normal has very obvious “edges” at the edge of the decal, but in an actual-use case this would be mitigated by alpha blended edges.

 

I also support decals with just normal maps and no regular texture. I use this to have footprints in the snow, for instance:

 

Who left these?

Alpha blending

I was worried about how blending normals into the normal part of the G-buffer would work, and also blending in the sRGB color space. It turns out neither is a big visual problem.

I use the spheremap transform for storing my normals. I haven’t done a mathematical analysis, but it seems to behave fine when blending new normals in (i.e. there’s nothing too crazy going on on the edges of my decals).
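Roughly, the encode/decode pair looks like this (transcribed to C# for readability – a sketch of the standard spheremap formulation, with the usual caveat that n.Z = -1 degenerates):

    // Spheremap (Lambert azimuthal equal-area) encoding of a unit view-space normal.
    Vector2 EncodeNormal(Vector3 n)
    {
        float f = (float)Math.Sqrt(8f * n.Z + 8f);   // degenerate when n.Z == -1
        return new Vector2(n.X / f + 0.5f, n.Y / f + 0.5f);
    }

    Vector3 DecodeNormal(Vector2 enc)
    {
        Vector2 fenc = enc * 4f - new Vector2(2f, 2f);
        float f = Vector2.Dot(fenc, fenc);
        float g = (float)Math.Sqrt(1f - f / 4f);
        return new Vector3(fenc.X * g, fenc.Y * g, 1f - f / 2f);
    }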

Now is a good time to talk a bit more about my G-buffer layout. My albedo buffer also contains an emissive channel (in A), and the normal buffer has specular intensity and power (in B and A). I simply turn off color writes to those channels when drawing decals.

Emissive

Eventually I’ll support modification of the specular channels with decals, but I decided I definitely wanted modification of the emissive channel now. This makes it easy to put mysterious glowing marks on cliffs, for instance.

 

What does it mean?

 

I was worried that I wouldn’t be able to draw both into the color part of my albedo buffer (RGB) and into the emissive part (A), since I need the source alpha to control which parts of the decal are transparent – or that I would have to draw the decal twice to achieve this.

However, as long as I’m ok with a constant emissive value over the entire decal, I can do this with a single draw operation. The trick is to control the color blending with the (source) alpha channel, but control the alpha channel blending with the blend factor:

 

        BlendState blendStateTexture = new BlendState
        {
            AlphaBlendFunction = BlendFunction.Add,
            ColorSourceBlend = Blend.One,
            AlphaSourceBlend = Blend.BlendFactor,
            ColorDestinationBlend = Blend.InverseSourceAlpha,
            AlphaDestinationBlend = Blend.InverseSourceAlpha,
        };

 

Then when I draw, I set the GraphicsDevice’s BlendFactor to the emissive value I require.
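Concretely (DrawDecalBox is a stand-in for the actual draw call):

    // With AlphaSourceBlend = Blend.BlendFactor, the written alpha is
    // decalAlpha * BlendFactor.A + destAlpha * (1 - decalAlpha): opaque parts of
    // the decal write the chosen emissive value, transparent parts leave it alone.
    device.BlendState = blendStateTexture;
    device.BlendFactor = new Color(emissive, emissive, emissive, emissive);
    DrawDecalBox();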

 

Restricting which objects are affected

If you project a texture against a cliff, you don’t want a character to walk through the decal projection box and suddenly have the decal appear on the character. People solve this issue in various ways. One method might be to draw moving objects after your decal pass. The way I solve this is to write an object type id into my g-buffer. I have a channel reserved for this in my depth buffer (very convenient). So when the decal shader samples from the depth buffer, it can tell what kind of object that pixel is from. I can set the decal to match only certain types of objects. This is more flexible than a simple dynamic/static choice and imposes fewer restrictions on render order. I can, for instance, have a decal projected onto the ground but not affect low-to-the-ground shrubs.
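On the CPU side this can be as simple as a per-decal mask of object types (a sketch – the type names are mine, and whether the shader does a mask test or an exact id compare is an implementation choice):

    [Flags]
    enum ObjectType : byte
    {
        Terrain   = 1 << 0,
        Rock      = 1 << 1,
        Shrub     = 1 << 2,
        Character = 1 << 3,
    }

    // A ground decal that ignores characters and low shrubs:
    ObjectType groundDecalMask = ObjectType.Terrain | ObjectType.Rock;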

 

Neither the character nor the cactus is affected by the decal on the ground, despite being inside the decal projection box.

 

I’ll try to follow this up in a while with a post giving more details on the tangent space reconstruction, perhaps some shader instruction counts, and any other issues I come across (like fading out the decal texture when it’s projected at oblique angles, which I haven’t addressed yet).


More trees and texture tricks

I spent a while making some improvements to my tree editor. Mostly just workflow cleanup, but I also added a few new features. One involves more control over which leaf textures are used for particular leaves, and the other involves removing triangles where the texture is completely transparent. You can see that in the following palm tree: the rectangular leaf grids have their corners chopped off where there are no opaque pixels.

 

[Image: generated palm tree model with trimmed leaf grids]

 

I used these improvements as I was creating a new definition for a palm tree. Here’s a scene with 4 of the generated models:

 

[Image: scene with the generated palm trees]

 

[Image: another scene with the generated palm trees]

 

I’m still not super-satisfied with the lighting on these, but I’m not sure what is missing.

One thing I had been neglecting for a while was specular maps for vegetation. I didn’t think the improvement in visual quality was sufficient to justify the cost of an extra texture sample.

Could I squeeze it into one of the color channels? I recall someone telling me they had done something similar with their terrain. They discarded the blue channel for actual color, and used it and the alpha channel to store a normal map. Blue was calculated based on the red and green channels. Since most ground textures are green or brown, very little information is lost.

I figured the same was true for vegetation, so I added support for that to the shader and model processing pipeline. I can now replace the blue channel of the diffuse texture with a monochrome specular map. Can you tell the difference?

 

[Image: palm trees with and without the specular map stored in the diffuse texture]

 

The tree on the (highlight to read ->) right has no information in its blue channel. We calculate the blue channel based on the red and green channels and a shader constant that is determined at compile time when we analyze the texture.

 


 

The original texture is on the left. The specular map is in the middle. The texture on the right is actually what gets used. The blue channel has been replaced with the specular component.

To calculate the actual diffuse color in the shader, we just do:

float4 diffuse = tex2D(DiffuseSampler, texCoords);
...
diffuse.b = dot(diffuse.rgb, BlueFromColor);

BlueFromColor is a float3, because the same shader is used when there is no specular map stashed in the diffuse – in that case, BlueFromColor is (0, 0, 1). When we do have a specular map, the third component will be zero, and the first two will be non-zero.
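The compile-time analysis can be as simple as a least-squares fit of blue from red and green over the texture’s pixels – a sketch of the idea:

    // Fit b ≈ x*r + y*g over all pixels, minimizing squared error, and return
    // the BlueFromColor constant (x, y, 0) for the shader.
    Vector3 FitBlueFromColor(Color[] pixels)
    {
        double rr = 0, rg = 0, gg = 0, rb = 0, gb = 0;
        foreach (Color p in pixels)
        {
            double r = p.R, g = p.G, b = p.B;
            rr += r * r; rg += r * g; gg += g * g;
            rb += r * b; gb += g * b;
        }
        // Solve the 2x2 normal equations: [rr rg; rg gg] * [x; y] = [rb; gb].
        double det = rr * gg - rg * rg;
        double x = (rb * gg - gb * rg) / det;
        double y = (rr * gb - rg * rb) / det;
        return new Vector3((float)x, (float)y, 0f);
    }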


Graphics debugging aids

Just thought I’d make a quick, very graphical, post about various tools I have in my engine to better visualize things.

 

Wireframe mode. Pretty straightforward.

The 3 main components of water, and the final result.

Visualizing specular light only.

The 16 bit (linear) light accumulation buffer for HDR.

Disabling the diffuse buffer in my deferred renderer’s G-buffer.

Showing collision primitives

Ambient light with the diffuse color disabled.

Objects rendered taking their color from the world albedo estimate map (used to color ambient lighting).

Similar to the albedo map above, but for the “low frequency” ambient occlusion texture.


Latest screenshots

After the fixes to my HDR pipeline and albedo, the ambient lighting/global illumination, ambient occlusion map, and various other tweaks, I thought I’d show some screenshots of the results.

 

I like the detail still visible in the shadowed areas of the colorful plants.

 

The snow is now textured, instead of taking the normal map from the underlying terrain (which looked terrible). Taking into account the bright snowpack when calculating ambient light helps brighten the dark cliffs. The inset shows what it would be like if I used a fixed scene albedo instead (i.e. too dark).

 

Good detail in the shaded part.

 

I have a mode that provides a bit of a vignette in dim night light, helping to create the feel of a dark night while still being able to see things (this isn’t new, I just thought it looked nice).

 

Slight darkening along the base of the wall, visible mainly in the shadows.

 

Fog in the desert isn’t realistic, but here we are.


Global illumination – improving my ambient light shader

In this post I’ll go over the modifications I’ve made to my ambient light shader to have more realistic “global illumination”.

First let’s look at some of the problems:

 

[Image: canyon with a shadowed wall missing bounce light]

 

What’s wrong in this image? The sun is shining on the bright orange cliff on the right, but the shadowed cliff wall on the left is ignorant of this.

The article here describes a cheap way to mimic this effect: add another light that points in the opposite direction of the sun.

Bounce lighting

So let’s try this out. One place this quickly falls apart is when the sun is shining directly down. The “bounce light” will be coming from below. What happens to our birch trees, which were nicely lit during sunrise? Have a look, the bottom image is with the sun shining down:

 

[Image: birch trees at sunrise (top) and with the sun directly overhead (bottom)]

 

The trunks are unrealistically dark because the bounce lighting isn’t really doing anything here: It’s coming from below, and so vertical surfaces are not lit. Granted, the article mentioned above (which is careful to note that its techniques are quick and dirty for demos) says that you should make the bounce light horizontal. When the sun is mostly overhead though, which horizontal direction do you choose?

But ok – let’s see what it looks like when the sun is at a lower angle (where a horizontal bounce direction makes more sense). Here’s a directional (sun) light only, near sunrise:

 

[Image: directional (sun) light only, near sunrise]

 

 

Now let’s put another directional light from the opposite direction with (SunColor * SceneAlbedo). This is our “bounce light”:

 

[Image: the bounce light only]

 

And now combined:

 

[Image: directional and bounce lights combined]

 

Well that’s nice, except we see there is a dark area on the tree trunk where it is affected by neither light. That might be accurate if you were holding a mirror up to the sun, but not when the sun is reflected off the whole scene. One option is to have the bounce light “wrap around” past 90 degrees a bit. That does do a decent job of fixing the above two issues.

But let’s take a closer look at what is actually happening: the sun is shining down and lights up the scene. Let’s make a simple assumption that the scene is flat ground:

 

[Image: diagram of sunlight bouncing off flat ground around a tree]

 

Assuming the sun is lighting up an “infinite” area around the tree, the bounce light affecting the tree will basically be a hemispheric light. It’s like the tree is lit by a cubemap, the bottom half of which is (SunColor * SceneAlbedo) modulated by the angle at which the sun is hitting the ground, and the top half of which is black (when considering only the bounce lighting, not the original sunlight).

This gives the “wrap around” I mentioned for the bounce light a little more physical basis:

 

float3 bounceLightColor = sunlightColor * sceneAlbedo * saturate(dot(sunDirection, groundUpVector));
// Hemispheric wrap: surfaces facing the ground receive the full bounce color, surfaces
// facing the sky receive none (the * 0.5 + 0.5 remaps the dot product from [-1, 1] to [0, 1]).
float3 bounceLight = bounceLightColor * saturate(dot(surfaceNormal, -groundUpVector) * 0.5 + 0.5);

 

Now the trunks are lit up by sunlight bouncing off the ground:

 

[Image: tree trunks lit by the hemispheric bounce light]

 

Now back to the canyon image at the beginning of this article. We wanted the light reflected off the opposite canyon wall to light up the shadowed wall a bit. Well, this hemispheric light isn’t going to do much for that – our light is coming from the ground hemisphere, and we’re essentially ignoring the sun direction (or rather, it’s only being used to determine the strength of the sunlight reflecting off the ground).

To get around this problem, we can use a modified ground plane. We’ll tilt it slightly towards the sun. This arguably has a physical basis – the sun isn’t just reflected off flat ground, it’s reflected off of “above ground” objects like other trees and rocks. Yes, there are objects on both sides of the point we’re lighting, but only the ones on the opposite side of the sun really contribute to bounce lighting.

 


Tilted ground surface for bounce lighting.

 

The tilted ground normal is calculated like so:

Vector3 tiltedGroundNormal = Vector3.Normalize(actualGroundNormal * C_VALUE + directionToSun);

 

I use a value of 0.75 for C_VALUE currently.

 

So here’s one of those cliffs again, showing the directional light with and without the bounce light (I’ve also turned off the overhead hemispheric ambient light, so that’s why the top image is completely dark in the shadowed areas):

 

[Image: cliff with and without the bounce light]

 

Scene albedo

I’ve used the term scene albedo a number of times so far. What is it? Well, I started off using a brownish green color with an albedo of about 0.2 – a typical value for grass, dirt, cliffs, etc.

I knew using a fixed scene albedo would be wrong, but the bounced light is a fairly minor component compared to the sun itself, so it’s not terribly noticeable. Except when the actual ground has a very different albedo – such as snow.

 


On the left, the birch trunks are nicely lit by the rising sun. At noon, on the right, the trunks are way too dark, even though they should be lit by our bounce hemispheric light.

 

So I decided it was time to make a world albedo map. This is done by rendering the world from above, chunk by chunk, stitching it all together, and blurring it significantly. Snowfall is dynamic in my world though (what a problem that is!), so I also need to make a rough estimation of how much a particular piece of ground is affected by snow (e.g. water and cliffs aren’t), and then combine that with the current snowpack in that area. The result of doing this is that the trees are much better lit in bright snow:
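The generation step is roughly an orthographic top-down render per chunk into one big target, followed by a heavy blur. A sketch – the chunk bounds, viewport mapping, and draw/blur helpers are stand-ins:

    device.SetRenderTarget(worldAlbedoMap);
    foreach (Chunk chunk in world.Chunks)
    {
        // Top-down orthographic camera fitted to this chunk's bounds.
        Matrix view = Matrix.CreateLookAt(chunk.Center + Vector3.Up * 500f, chunk.Center, Vector3.Forward);
        Matrix projection = Matrix.CreateOrthographic(chunk.Width, chunk.Height, 1f, 1000f);
        device.Viewport = ChunkToAlbedoMapViewport(chunk);
        DrawChunkAlbedo(chunk, view, projection);   // albedo only - no lighting
    }
    device.SetRenderTarget(null);
    BlurHeavily(worldAlbedoMap);                    // we only want the low frequency part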

 

[Image: trees correctly lit in bright snow with the variable scene albedo]

 

Just for kicks, let’s compare some test objects: one out in the water vs one in the snow:

 

[Image: two test cylinders – one out in the water, one in the snow]

 

 

 

These are backlit, so most of the light we’re seeing on them comes from our ambient term (which includes scene albedo). If we compare them side by side, we can see the significant difference in brightness:

 

[Image: the two cylinders side by side]

 

So this is a very very rough approximation of objects inheriting some color from their surroundings. If I paint the ground bright colors, it’s more noticeable:

 

[Image: cylinders inheriting color from brightly painted ground]

 

It’s important to note that this is not directional though. The cylinder in the middle is a mix of green and magenta even though from this angle it would be greenish in the real world. For a directional solution we could use spherical harmonics. I describe my experience with that here.

The other ambient term

So far we’ve been talking about the sunlight bounce term. There’s also the ambient term that describes how the object is lit by the sky (as opposed to the sun/moon). With everything we’ve talked about so far, this one becomes pretty obvious. For this I use a hemispheric light that goes from the SkyColor on top to SkyColor * SceneAlbedo on the bottom. This is basically what I did before, except now we have the benefit that it gets some color from its surroundings.
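As a formula, that term is a simple hemispheric blend (a sketch, in C# for readability):

    // Hemispheric sky ambient: SkyColor from above, SkyColor * SceneAlbedo from below.
    float w = MathHelper.Clamp(Vector3.Dot(surfaceNormal, groundUpVector) * 0.5f + 0.5f, 0f, 1f);
    Vector3 skyAmbient = Vector3.Lerp(skyColor * sceneAlbedo, skyColor, w);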

Let’s finish with a foggy nighttime scene with the dark sides of the conifers lit by bounced moonlight:

 

[Image: foggy night scene, conifers lit by bounced moonlight]

 

 

Oh wait, one more. The following image shows our three main lighting terms, and the final image with them composed together:

 

[Image: the three main lighting terms and the final composite]

 
