Wednesday, November 24, 2010

Dual Quaternions... All the Way... Across the Vectorspace

I can't believe I'm saying this, but complex numbers actually kinda make sense. Dual numbers and split-complex numbers confuse me.

However, I'm trying to get a handle on them because there seems to be a small new wave of skinning techniques coming from "dual quaternions": quaternions composed of 4 "dual numbers" instead of 4 Real numbers.

DISCLAIMER: I am by no means a pedantic mathematician, so I'm probably going to say something really wrong by the measure of what a mathematician would deem appropriate. I'm approaching this purely from an application and implementation standpoint.

So, back to the basic definition of a complex number: i is the "imaginary unit" such that i*i = -1. A coordinate in the complex plane is given by a Real coefficient and an imaginary one (i.e. 1 and i are basis vectors). This gives a + bi as a complex number.

Dual numbers and split-complex numbers are almost the same... except e, the "dual unit", is defined as e*e = 0 and j, the "split-complex unit", is defined as j*j = 1. Wait, what? But that should give e = sqrt(0) and j = sqrt(1). Aren't those just 0 and 1, respectively?

Well it turns out that if you take e to be the matrix [[0, 1][0, 0]], you can get some useful properties out of it (covered here). It maintains the property that e*e is the 0-matrix, and allows you to have an "alternate complex plane" of two coefficients a and b such that a + be is a dual number.
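To convince myself the arithmetic actually holds together, here's a minimal sketch of a dual number type (my own illustration, not from any paper); the trick is that the e*e term simply never gets written out:

//A dual number a + b*e, where e*e = 0.
struct Dual
{
  float a; //Real part
  float b; //Dual part

  Dual(float iA, float iB) : a(iA), b(iB) {}
};

//(a1 + b1*e)*(a2 + b2*e) = a1*a2 + (a1*b2 + b1*a2)*e + b1*b2*(e*e)
//The last term is scaled by e*e = 0, so it just vanishes.
Dual operator*(const Dual &iL, const Dual &iR)
{
  return Dual(iL.a * iR.a, iL.a * iR.b + iL.b * iR.a);
}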

Using dual numbers as components of quaternions instead of Real numbers allows you to introduce a translation component directly into the quaternion. How this alteration affects quaternion multiplication is demonstrated here. As far as I can tell, this type of transformation is called a "rigid transformation": a rotation plus a translation, with nothing that deforms the object (no scale or shear), hence "rigid". If you need to research things more, there's a keyword for you.

Since the dual unit distributes over addition, given a quaternion defined as [v, s] such that v = [x + ae, y + be, z + ce] and s = [w + de], you can describe this "dual quaternion" by breaking it into two regular quaternions, where one is scaled by the dual unit: Q0 + eQ1, such that Q0 = [x, y, z, w] and Q1 = [a, b, c, d].
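In code, that decomposition might look something like this (a sketch assuming a Quaternion class with the usual Hamilton product and addition; the e*e term drops out exactly like it did for plain dual numbers):

//A dual quaternion Q0 + e*Q1, stored as two ordinary quaternions.
struct DualQuat
{
  Quaternion q0; //"Real" part: the rotation
  Quaternion q1; //"Dual" part: encodes the translation
};

//(A0 + e*A1)*(B0 + e*B1) = A0*B0 + e*(A0*B1 + A1*B0), since e*e = 0.
DualQuat Multiply(const DualQuat &iA, const DualQuat &iB)
{
  DualQuat lResult;
  lResult.q0 = iA.q0 * iB.q0;
  lResult.q1 = iA.q0 * iB.q1 + iA.q1 * iB.q0;
  return lResult;
}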

Hey, that's neat... just like the imaginary unit, we can pretty much ignore the dual unit and not attempt to represent it until we need to get meaningful info out of the quaternion!

So, how is this pertinent to skinning? Instead of having an encompassing data structure like a VQS transformation (V = vector = position, Q = quaternion = rotation, and S = scale), translation can be incorporated directly into the concatenation of quaternions, much like translation gets folded into matrix concatenation.
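To make that concrete, here's a sketch of how a rotation quaternion and a translation get folded into a single dual quaternion, reusing the DualQuat sketch above and assuming a Quaternion(x, y, z, w) constructor, a Conjugate() helper, and a Vector3 type (the standard construction is Q0 = q and Q1 = 0.5*t*q with t treated as a pure quaternion; the translation comes back out as 2*Q1*conjugate(Q0)):

//Build a dual quaternion from a unit rotation quaternion and a translation.
DualQuat FromRotationTranslation(const Quaternion &iRot, const Vector3 &iTrans)
{
  DualQuat lResult;
  lResult.q0 = iRot;

  //Treat the translation as a pure quaternion (zero scalar part):
  Quaternion lT(iTrans.x, iTrans.y, iTrans.z, 0.f);
  lResult.q1 = (lT * iRot) * 0.5f;
  return lResult;
}

//Recover the translation: t = 2 * Q1 * conjugate(Q0), taking the vector part.
Vector3 GetTranslation(const DualQuat &iDQ)
{
  Quaternion lT = (iDQ.q1 * Conjugate(iDQ.q0)) * 2.f;
  return Vector3(lT.GetX(), lT.GetY(), lT.GetZ());
}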

There are various papers that demonstrate the usefulness of this elsewhere [Kavan et al. 2006]. In many instances, it cuts down on the number of operations you have to do since the translation is right there. It also naturally covers some pinching edge cases, just like quaternion double-cover does for blending.

It seems to have many benefits so far, apart from some added mathematical complexity. And since when has mathematical complexity stopped game programmers from shaving off a few instructions? There's a catch (isn't there always?): they can't be linearly interpolated in a clean manner. Kavan et al. maintain that the only effective way to interpolate between dual quaternions is an extension of Slerp called "Screw Linear Interpolation" or "Sclerp". There does exist a way to linearly interpolate, but it requires lots of silly finagling that is more costly than Sclerp-ing itself. Directly interpolating the components is ineffective and can result in singularities. That's not cool.

That's a problem. Sclerp being the only effective means of interpolation means that:
  1. Animation blending is order-dependent, e.g. Sclerp(Sclerp(Q0, Q1), Q2) != Sclerp(Q0, Sclerp(Q1, Q2)).
  2. The only method of blending is VERY expensive. Just like Slerp, Sclerp has a linear combination of sin() and cos(). If you've already approximated your animation to the frame-level with a fitted curve, you don't need hyper-correct, constant-velocity interpolation between each keyframe. It's unnecessary complexity.
So, while they're kinda nifty... I'm not so sure that they'll lead to anything for game developers until a better interpolation method is found. I definitely want to do more research to see if there is a better way to interpolate, but I just don't see it yet. Until then, VQS it is!

EDIT (11/25/10): There's a light at the end of the tunnel! I learned from Andy Firth and Christian Diefenbach of Bungie that you can effectively avoid the singularities of linearly blending dual quaternions. I'll post my findings on the fuller proofs of how that's possible. This makes me happy; dual quaternions seem like a very useful solution to skinning with animation blending, and it'd be a shame to not be able to use them for real-time applications.
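As a teaser, here's a rough sketch of the kind of linear blend I mean (essentially the dual quaternion linear blending Kavan et al. describe, assuming the DualQuat sketch above plus Quaternion addition, scalar multiplication, Dot(), and Length() helpers): flip each pose into the same hemisphere as a pivot, blend component-wise, then renormalize by the magnitude of the non-dual part.

//Blend dual quaternion poses with normalized weights, then renormalize.
//The sign flip keeps every pose in the same hemisphere as the first one,
//which is what sidesteps the blending singularities.
DualQuat BlendLinear(const DualQuat *iPoses, const float *iWeights, unsigned iCount)
{
  DualQuat lSum;
  lSum.q0 = Quaternion(0.f, 0.f, 0.f, 0.f);
  lSum.q1 = Quaternion(0.f, 0.f, 0.f, 0.f);

  for(unsigned i = 0; i < iCount; ++i)
  {
    //Negate poses that lie in the opposite hemisphere from the pivot:
    float lSign = (Dot(iPoses[i].q0, iPoses[0].q0) < 0.f) ? -1.f : 1.f;

    lSum.q0 = lSum.q0 + iPoses[i].q0 * (iWeights[i] * lSign);
    lSum.q1 = lSum.q1 + iPoses[i].q1 * (iWeights[i] * lSign);
  }

  //Normalize by the magnitude of the non-dual part:
  float lInvLen = 1.f / Length(lSum.q0);
  lSum.q0 = lSum.q0 * lInvLen;
  lSum.q1 = lSum.q1 * lInvLen;
  return lSum;
}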

Wednesday, November 17, 2010

Bezier Spline Demo Videos

I put together some quick videos demonstrating some of my spline and motion-along-a-path work I discussed in this post.

I'm intending this to be used as a tool to allow artists to test their animation blending while the character is in motion.

NOTE: the character that exists in many of my demonstrations of this tool is Tad from an unrelated DigiPen game called Tad Studbody and the Robot Rampage (download here).

The green sphere at Tad's feet is the Center of Interest that controls his orientation along the path. It's simply the average of points 1, 2, and 3 meters ahead (Tad's units make him much larger than a normal person).
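In case that sounds fancier than it is, the whole thing boils down to something like this (a sketch; GetPointAtDistance() is a stand-in for however you sample the path by arc length):

//Average a few look-ahead points on the path to get a smoothed Center of Interest.
Vector3 ComputeCenterOfInterest(const Path &iPath, float iCurrentDistance)
{
  Vector3 lCOI = iPath.GetPointAtDistance(iCurrentDistance + 1.f)
               + iPath.GetPointAtDistance(iCurrentDistance + 2.f)
               + iPath.GetPointAtDistance(iCurrentDistance + 3.f);

  return lCOI * (1.f / 3.f);
}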

You can see the shells (the blue lines) where I generate extra points to make the piecewise Bezier spline interpolate the input points. The smoothing factor is also demonstrated. A smoothing factor of 0 causes the path to become linear.

Bezier Spline Interface from Zak Whaley on Vimeo.

This is a path generated from the originally described algorithm where I fudged the first and last tangents by laying a chord between the first two points and the last two points.

Open Path Demo from Zak Whaley on Vimeo.

This is a closed path using a slightly modified version of the algorithm which says that the first and last tangent are parallel to the chord stretched between the second input point and the last input point.

Closed Path Demo from Zak Whaley on Vimeo.

Thursday, November 11, 2010

Milestone Update: Let There Be Post Processing

Cool! So I have a lot of updates from just a couple of days ago...

Light Pre-Pass
All the lighting issues I was having before are resolved! That weird thing with the normals was due to recalculating the normal incorrectly.

So, what I was doing was saying Z was -(1 - abs(X) - abs(Y)), since I know that the magnitude of the normal should be 1. Given that all normals should be facing the camera and that DirectX is a left-handed system, you may safely assume that all Z components are negative. This is almost correct, except I was being silly and needed to use the Pythagorean theorem. Therefore, Z should be -sqrt(1 - X*X - Y*Y).

Further, I was packing the X and Y of the normal incorrectly in the GBuffer. If they were negative, they would be clamped to 0. I finally realized I needed to repack them as (X+1)/2 and (Y+1)/2 and unpack them as X*2 - 1 and Y*2 - 1. This finally allows me to recalculate Z reliably and frees up the Z channel in the GBuffer to be combined with the W channel. That means I can pack the depth component across two channels, giving me 16-bit depth in my lighting calculation!

//Normal Z recalculation:

//Stored as [0, 1], convert to [-1, 1]
float3 lNormal = 2*lGBufferSample.xyz - float3(1,1,1);

//Z is guaranteed to be negative and to make the magnitude of the normal 1:
lNormal.z = -sqrt(1.f - lNormal.x*lNormal.x - lNormal.y*lNormal.y);

NOTE: I'm using ARGB8 for all my render targets for maximum compatibility with target machines.
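For reference, the two-channel depth pack/unpack I'm describing looks roughly like this (a sketch for 8-bit channels, written as C++ but it maps one-to-one to HLSL floor()/frac(); the input depth is view-space depth already normalized to [0, 1]):

#include <cmath>

//Pack a normalized depth value into two 8-bit channels: Z gets the coarse
//8 bits, W gets the remainder, for ~16 bits of effective precision.
void PackDepth(float iDepth, float &oZ, float &oW)
{
  float lScaled = iDepth * 255.f;
  oZ = floorf(lScaled) / 255.f;   //coarse bits
  oW = lScaled - floorf(lScaled); //fractional remainder
}

//Recombine the two channels back into a single depth value.
float UnpackDepth(float iZ, float iW)
{
  return iZ + iW / 255.f;
}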

The GBuffer where only XY of the normal are stored, and ZW contains the packed depth data.
Radius Based Attenuation
I really wanted to be able to have radius-based point lights; however, given the standard attenuation model of 1/(C + L*d + Q*d*d), where C, L, and Q are, respectively, the constant, linear, and quadratic components, being able to reasonably light a scene would require light hulls with massive radii. This completely defeats the purpose of light hulls: to reduce the number of pixels being lit, since lighting is a fillrate-intensive operation.

I realized that what I really wanted was to be able to manually describe what the falloff looked like. What I did in the interim is use an Ease Function. This is a function that expects normalized input and gives normalized output. It's used to "ease" something "in" and "out" of a state (1 and 0, respectively). I defined my Ease Function as a simple quadratic and input the distance of a given fragment, normalized with respect to the light hull's radius. This output is my attenuation. As simple as that, and with no more computation than the typical attenuation model.
Exaggerated quadratic falloff of the Ease Function.
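Concretely, the interim version is no more than this (a sketch; I'm assuming an ease-out quadratic here, i.e. attenuation = (1 - t)^2):

//Quadratic ease used as attenuation: normalize the fragment's distance by the
//light hull's radius, then ease from 1 (at the light) down to 0 (at the radius).
float EaseAttenuation(float iDistance, float iLightRadius)
{
  float lT = iDistance / iLightRadius;
  lT = (lT < 0.f) ? 0.f : (lT > 1.f) ? 1.f : lT; //saturate to [0, 1]

  float lEase = 1.f - lT;
  return lEase * lEase; //quadratic falloff
}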
This opens up a whole new world of lighting models for me. I'm going to give my tool a control that defines the Ease Function as a quadratic Bezier curve. This would allow an artist or designer to describe exactly the falloff look they want.

Here's an example of what I plan for the control:
The distance is normalized and passed in as the "X" axis and results in an attenuation, the "Y" axis.
If the user really does want a more realistic attenuation model, standard 1/r^2 can even be modeled by pulling the control point into a 1/r^2-looking curve. Splines are super quick to calculate, as much of the math can be precomputed beforehand. It'll just boil down to a few multiplications and some additions when I'm done. The only difficult part will be how I turn a distance into a t-value, as Bezier splines are defined in parametric form. I'll probably have to make a few assumptions and shortcuts about negatives, but I think it'll yield very intuitive and useful results for content developers.
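The evaluation itself would only be about this much work (a sketch of the quadratic Bezier version; I'm pinning the endpoints at (0, 1) and (1, 0) and taking the shortcut of feeding the normalized distance straight in as the t-value, which dodges the distance-to-t problem at the cost of ignoring the control point's X):

//Attenuation from a quadratic Bezier ease curve. iControlY is the Y of the
//middle control point, i.e. the artist's knob.
float BezierAttenuation(float iNormalizedDistance, float iControlY)
{
  float lT = iNormalizedDistance;
  float lU = 1.f - lT;

  //B(t).y = u*u*P0.y + 2*u*t*P1.y + t*t*P2.y, with P0.y = 1 and P2.y = 0:
  return lU * lU + 2.f * lU * lT * iControlY;
}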

Further, even if splines become too heavy, on saving out the light info, the curve could easily be baked to an Ease Function. Most lights won't change their attenuation, but for those that do, the overhead of dynamic attenuation will likely be acceptable.

Material Editing
Now that I've completed my post processing framework, I can finally start on my material editor. This will allow content developers to specify normal maps, specular maps, BRDF maps and the coefficients that control them.

Here's a quick run-through of my (limited) stages. The model in these images is the Alchemist from the Torchlight series. The Torchlight assets were published for modders by Runic Games. Thanks to the roughly 210 MB of assets, I've found many edge cases in my model converter, which has made the tool much more robust, and therefore much more useful to the artists and designers who would actually use it.
The GBuffer with 2-channel packed depth. The Alchemist didn't have a normal map, so I'm just applying a cool junk normal map to him.

Lighting data constructed from the GBuffer.

Albedo combined with the light information.

A simple Bloom Lighting effect applied.

After AntTweakBar and debug UI are applied.

Tuesday, November 9, 2010

Reticulating Splines

I now have piece-wise Bezier splines that interpolate input points! I ran through many tutorials and equations to try and find a general form that produced these results, but nothing seemed to work.

What I finally ended up doing was based on a heuristic I developed.

Initial Thought
My initial idea was to specify an "input tangent" (in the base case, the direction vector between the first two points) and an "output tangent" parallel to the chord stretched between the previous point and the next point (thus making that chord the tangent at the current point).

The final "output tangent" is simply defined as the vector from the second to last point and the last point. This takes care of all the necessary constraints to select a single spline from the infinite possibilities.

When these tangents have a positive dot product, you can construct an intersection point (making a triangle) using the previous and current points and the entrance and exit tangents. Then, to pick piece-wise Bezier control points, just scale along the triangle's sides by an arbitrary "smoothing factor". The larger the scalar, the closer the control points are to the tip of the triangle (the intersection point); the smaller the scalar, the closer the Bezier curve is to a line.

This runs into a problem when the dot product between the two tangents is negative: the intersection point causes cusps in the spline. To solve this, I originally special-cased these segments by scaling along the entrance and exit tangents by the smoothing factor, scaled by the distance between the points. I figured this heuristic made as much sense as any, even though it was arbitrary.

Realization
When I was describing my algorithm to Chris Peters (an instructor of mine), he noted that I could just do that simplistic heuristic for all points and avoid the parallel tangents case altogether.

I'm super excited, because my results look awesome and are easily tweaked with a single magic number (which works best at around 0.3).
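In code, the per-segment control point placement boils down to something like this (a sketch of the simplified heuristic; P is the array of input points, iSmoothing is that magic 0.3, and Normalize()/Length() on a Vector3 are assumed):

//Place the two inner Bezier control points for the segment between P[i] and
//P[i+1]. Tangents are parallel to the chord between each point's neighbors
//and are scaled by iSmoothing times the segment length.
void GetSegmentControlPoints(const std::vector<Vector3> &P, unsigned i,
                             float iSmoothing, Vector3 &oC1, Vector3 &oC2)
{
  //Tangent at P[i]: chord from the previous point to the next point.
  //(The endpoints would instead use the chord between the first/last two points.)
  Vector3 lTanIn  = Normalize(P[i + 1] - P[i - 1]);
  Vector3 lTanOut = Normalize(P[i + 2] - P[i]);

  float lSegmentLength = Length(P[i + 1] - P[i]);

  oC1 = P[i]     + lTanIn  * (iSmoothing * lSegmentLength);
  oC2 = P[i + 1] - lTanOut * (iSmoothing * lSegmentLength);
}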

Interpolating Bezier spline with smoothing = 0.3

As the smoothing approaches 0, the interpolating spline becomes the line segments between the points.

As the smoothing approaches 1 (each tangent being the whole distance between the points), generally undesirable results occur, but I left this possibility in for the user to decide.

The spline from the first picture without the shells.

Monday, November 8, 2010

B-Splines

Sweet! I love it when things just work. My old B-Spline code just dropped right in, flawlessly. Now I can do pathed animations!

Sunday, November 7, 2010

G-Buffer and Light Buffer

Yay! I'm almost done with my Light Pre-pass system. I still need to implement light hulls; right now I'm just doing CW culling and rendering every pixel greater than the light's bounding sphere. I have a few issues with my Light Buffer, though...

I'm storing the normal's X and Y in the R and G channels, and I'm packing the surface's depth in viewspace across Z and W. However, when I go to reconstruct Z from the normal, something doesn't seem quite right.

Am I wrong in assuming that, given a left-handed coordinate system, Normal.z = abs(Normal.x) + abs(Normal.y) - 1? I'm sure I can safely assume that all stored normals are pointed at me, so that guarantees a negative z-value for my normal.

The result is that at certain view-dependent angles, my model appears a uniform gray. Fortunately, everything seems good for the most part.

Here's what the image looks like when Tad is washed out.

Oh well, off to bed for now. I'll see what comes tomorrow.

<3 Lighting

Thursday, November 4, 2010

Light Pre-Pass

I'm right in the middle of implementing a light pre-pass post-processing system. I'm a decent way there; I've just got some nasty issues with the depth buffer. Can't wait to post some pretty pictures of it!

The animation system is almost complete for its basic implementation. Right now, you can drag and drop new meshes, textures, and animations to build a new asset. Now I need to implement a basic UI and asset reference counting.

Thursday, October 28, 2010

Staging Area

I'm now beginning to see the benefits of having an intermediate sandbox area for editing assets.

My ultimate plan is to have reference counting be used to cull assets that aren't being used at all. As I don't have reference counting in yet, through all my demos and tests, my animation, model, and texture trees are growing by massive, massive amounts. However, this is exposing something fundamentally flawed with how I was approaching my asset management that reference counting would merely cover up.

How would an artist want to use this tool? (1) To create a new resource for the game and (2) to edit an existing one. But wait, what's the full process of (1)?

>>Character artist:
Hmm... does this texture work? No. I need to up the value here...

>>Animator:
My arcs look terrible there. I'm going to go find a mirror and look silly for an hour. That'll fix everything.

>>Designer:
Oooh! Lemme try that animation. Screw that one. Or this other one. How about this one? Shit I need to redo that blend between the punch and the run.

Aaaand... Well, I don't like any of this. Cancel.

An artist isn't going to start out with final assets when creating a new one. It would be a pain to go through the process of creating a "new" asset, or "editing" an old one (and cancelling the changes later), just to see if your new animation/skin/rig, etc. works.

On top of that, someone like a designer, or an engineer doing stress tests, may try out lots of assets never intended to be in the final tree. It would be foolish to even consider placing them there and not in a temporary area.

So, there's this mode that I'm calling "Demo", an option (3) really. It guarantees no changes to the dev asset tree and the baked asset tree. It's basically a sandbox mode where you don't have to specify any names, whatsoever. Just toss things in and see how they look. Don't like it yet? Rework it and toss it in, live.

My current system initially assumes that all assets being tossed in are intended to be in the final tree, then prunes them if nothing ends up using them. This is one step too far, really, because the final asset isn't even defined yet.

So, what I've come up with are three, well-defined areas:

Dev Assets
  • Workable assets completed to some degree
  • Not necessarily final, but stable stages of a work-in-progress
  • These will be the live assets that artists will modify and re-check-in

Staged Assets
  • Local working copy of an artist's proposed changes to an asset
  • Can be tossed at any time and reset from what's actually in the hierarchy
  • Can be committed to Dev and Baked asset trees
  • Sourced assets must be in the Dev Asset tree for hard-linking

Baked Assets
  • Touched only by the asset tool
  • Final game versions of the files
  • Tagged with the version control revision number
  • Hard-linked to the file in the Dev Asset tree

The difference between the Staged Assets and the sandboxy "demo" view is that Staged Assets require you to be sourcing actual, registered assets. The "demo" mode is intended for informal and rapid iteration of assets. These assets can be located anywhere on the user's computer, on a network drive, or any other location accessible through a path.

Having a required Dev Asset tree guarantees everyone has proper access to all asset files (none of them will get lost on a large team), automated rebaking is guaranteed, and a lookup of an asset that's in-game to its corresponding Max or Maya file is painless.

Wednesday, October 20, 2010

All is Fair in Love and Debugging

lol, hacks. I needed to make sure that a function was only being called from two unique places. Since it was an operator, RMB > Find All References in Visual Studio wouldn't work, so I just grabbed the first two unique return addresses and broke if the current one wasn't either of those.
//Make sure there are only two calling functions:
unsigned lEIP = 0;

//Return address is at [EBP + 4] (assumes a standard EBP stack frame, i.e. no frame-pointer omission)
__asm
{
  mov ecx, [ebp + 4]
  mov [lEIP], ecx
}

static unsigned lFirstEIP = lEIP;

if(lFirstEIP != lEIP)
{
  //There should only be two functions that call this:
  static unsigned lSecondEIP = lEIP;

  if(lEIP != lFirstEIP && lEIP != lSecondEIP)
    __debugbreak();
}

Just goes to show that it doesn't matter what types of hacks you do while debugging. Do anything it takes to get the information you need. In debugging, there's no such thing as a bad hack :)

EDIT:
Oh yeah, and I also found out a while ago about the _ReturnAddress() intrinsic and StackWalk64(). These are MUCH more reliable tools to acquire this information.
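For completeness, the same check with the intrinsic would look roughly like this (a sketch; _ReturnAddress() comes from <intrin.h> in MSVC):

#include <intrin.h>
#pragma intrinsic(_ReturnAddress)

void SomeOperator()
{
  //Grab the caller's address without any inline assembly:
  void *lCaller = _ReturnAddress();

  static void *lFirstCaller = lCaller;
  if(lCaller != lFirstCaller)
  {
    //There should only be two functions that call this:
    static void *lSecondCaller = lCaller;

    if(lCaller != lSecondCaller)
      __debugbreak();
  }

  //... the operator's actual work ...
}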

Tuesday, October 19, 2010

Iterative Debugging

Sometimes when faced with a bug, I find it difficult not to plow right through, trying to figure out what logic is wrong. This is fine, but needlessly slow compared to following some basic debugging patterns.

Let the logic tell you what logic is wrong. So many solutions to debugging come from binary searches. For example, binarily replacing changed code until you pinpoint what's wrong, binary searching revisions until you find when the bug was introduced, binarily (and recursively) blocking off chunks of code with a profiler to find the origin of a bottleneck.

What caused me to remember this (for the umpteenth time) now was that I was able to use Occam's Razor on two seemingly identical pieces of code:

This works (copy-pasta from an example):
oQ.X() = (float) (lT0*iQ0.GetX() + lT1*iQ1.GetX());
oQ.Y() = (float) (lT0*iQ0.GetY() + lT1*iQ1.GetY());
oQ.Z() = (float) (lT0*iQ0.GetZ() + lT1*iQ1.GetZ());
oQ.S() = (float) (lT0*iQ0.GetS() + lT1*iQ1.GetS());

This doesn't:
oQ = lT0*iQ0 + lT1*iQ1;

By paring down the problem set, half a function at a time, I was able to determine in log2(n) steps (where n is proportional to the number of lines of code) where my exact problem was. Now I know that either

  • my scalar multiplication of Quaternions is broken or
  • one of my constructors is broken
After double-checking my constructors and multiplication operators, it turned out to be a faulty Normalization function that was running each time a Quaternion was constructed. If I had just stared at it, I might have assumed it was fine simply because it mathematically made sense.


Let logic logic for you.

And this is why your API should fail violently...

Animations aren't working any more. Crap, what changed? Let's see... I reorganized my animation system. Shit, I probably changed something wrong.

Debug. Debug. Step through. Print matrices. Wait... all the keyframes have the same transforms. What? Oh! Am I serializing out BoneTransform[0] instead of BoneTransform[i]? No... no because all the time deltas are correct.

It must be in the keyframe generation code. Damn, what did I change in there? Hmmm... well I changed the animation's name to something more reasonable than "Take001". But that wouldn't affe-- oh wait:

//Should be lTakeNames[i]->Buffer(),
//but this is always dumb: "Take001".
lAnimation->SetName(lFile.FileName());

//...

//Recalculate the animations of each bone:
for(unsigned j = 0; j < mSkeleton->mFbxBones.size(); ++j)
{
  CalculateKeyFrames(mSkeleton->mFbxBones[j],
  lAnimation->GetName().c_str(),
  lAnimation->GetTimelines()[j],
  mAxisConvertor,
  lStart,
  lKeyFrameTimes);
}

Shit, it's using the file name to request keyframe data instead of the take name...

//Should be lTakeNames[i]->Buffer(),
//but this is always dumb: "Take001".
lAnimation->SetName(lFile.FileName());

//...

//Recalculate the animations of each bone:
for(unsigned j = 0; j < mSkeleton->mFbxBones.size(); ++j)
{
  CalculateKeyFrames(mSkeleton->mFbxBones[j],
  //vvv THIS LINE DOWN HERE CHANGED
  lTakeNames[i]->Buffer(),
  lAnimation->GetTimelines()[j],
  mAxisConvertor,
  lStart,
  lKeyFrameTimes);
}

Yes! That fixed it! ... WAIT... why the HELL did the FBX SDK not throw any warnings or assertions that I was requesting a take that didn't exist? It crashes on almost anything else that goes awry; why not this? Instead, it repeatedly gave me the first keyframe on the global timeline.

Thanks, API.

Simple Bugs

This is just a quick post about a couple of silly bugs I found.


std::vector<FAIL>
In experimenting with the Ballmer Peak yesterday, I finally realized what was corrupting my serialized data. It was frustrating, because every other value besides my animation keyframes was serialized correctly. Viewing the keyframe data, only corrupt values could be seen. The correct number of keyframes was being serialized, but not the internal data.

I would watch it serialize out in one function call, and come back in in another. Different. What was different? My binary importer/exporter for my model system isn't a full-fledged serializer like my typical serialization systems. It's meant to be quick and dirty, and it bit me. It didn't support a Serialize() method on the object for recursive object definitions; it simply assumed POD types for everything high-level, which allowed what is tantamount to a memcpy.

The problem is, my data structure contained an std::vector, and I was serializing the pointers / junk members of the std::vector itself. This problem would have been hidden if it weren't for the fact that I was serializing data out and reading it right back in. That's because with deterministic execution and "dynamic base" turned off in your build settings, your pointers will usually end up being the same until user input or random values interfere.
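To make the failure mode concrete, here's a sketch of the shape of the bug (not my actual serializer, just an illustration): a memcpy-style write copies the vector's internal pointers rather than its elements, so it only "works" while those addresses still happen to be valid.

#include <cstdio>
#include <vector>

struct KeyFrameTrack
{
  float mDuration;
  std::vector<float> mKeyTimes; //<-- the problem child
};

//WRONG: treats the whole struct as POD and writes the vector's internal
//pointers to disk. Reading this back in another run yields junk.
void SaveBroken(FILE *iFile, const KeyFrameTrack &iTrack)
{
  fwrite(&iTrack, sizeof(iTrack), 1, iFile);
}

//Better: write the element count, then the actual elements.
void SaveFixed(FILE *iFile, const KeyFrameTrack &iTrack)
{
  fwrite(&iTrack.mDuration, sizeof(float), 1, iFile);

  unsigned lCount = (unsigned)iTrack.mKeyTimes.size();
  fwrite(&lCount, sizeof(unsigned), 1, iFile);

  if(lCount > 0)
    fwrite(&iTrack.mKeyTimes[0], sizeof(float), lCount, iFile);
}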


Only on the Surface
I always forget this... whenever you have crashes related to DirectX and the HRESULTs only say "Invalid Call", crank up your DirectX warning output in the control panel after switching to the Debug runtimes.

I was getting a device reset crash in WM_EXITSIZEMOVE and for the life of me couldn't figure out what I wasn't resetting. After remembering to turn on the debug runtimes, DirectX told me immediately: my render target surface wasn't being released and reacquired.

Sunday, October 17, 2010

Model Data

First it Was Easy
My Junior game, Æscher, didn't have any complex asset requirements, so my graphics engine just had managers for meshes, textures, and shaders. It then used a bucketed system to reduce the amount of context switching: everything with the same shader was rendered together, and within that, everything with the same texture was rendered together. Hardware instancing was used to make sure all objects with the same mesh were rendered at once.

In a stupid-simplified version, think of it like this:
typedef std::vector<Mesh> MeshBuckets;
typedef std::vector<MeshBuckets> TextureBuckets;
typedef std::vector<TextureBuckets> EffectBuckets;

//...
class GraphicsManager
{
  //...
  EffectBuckets mBuckets;
};

Then, you could loop through the hierarchy and keep the context switching to a minimum (i.e. it's most expensive to switch shaders, followed by textures, then meshes):
void GraphicsManager::Render()
{
  for(unsigned e = 0; e < mBuckets.size(); ++e)
  {
    Effect   &lEffect = mEffectManager.GetByID(e);

    //Start shading:
    unsigned lPasses = lEffect.Begin();

    //For all TextureBuckets associated with Effect 'e':
    for(unsigned t = 0; t < mBuckets[e].size(); ++t)
    {
      //Specify the current texture:
      Texture  &lTexture = mTextureManager.GetByID(t);
      lEffect.SetTexture(lTexture.GetTexture());

      //For all MeshBuckets associated with Texture 't':
      for(unsigned m = 0; m < mBuckets[e][t].size(); ++m)
      {
        //Get the mesh for HW instancing:
        Mesh &lMesh = mMeshManager.GetByID(m);

        //Set up HW instancing stuff

        //Render each pass:
        for(unsigned p = 0; p < lPasses; ++p)
        {
          lEffect.BeginPass(p);
          
          //DirectX draw call

          lEffect.EndPass();
        }//End Passes
      } //End Meshes
    } //End Textures

    lEffect.End();
  }//End Effects
}

Not so Fast
However, with the new complexities I'm introducing, it's not as easy as that. Now I have skeletons and animations to manage as well as normal maps, per-instance materials, etc. I'll cross each bridge as I come to it, but right now I'm stuck on how I should handle my new mesh system.

My game only supports one mesh per object. I could extrapolate this to multiple meshes in an object, but that's something that can come later. It simplifies things for me to make this assumption.

A skeleton is something each mesh needs in order to be skinned (if it's not static). Since there can be static objects, not all meshes have skeletons. However, for the objects that do have skeletons, I can't decide whether the skeleton should even live in the Mesh data structure or not.

Now, conceptually, yes, I should just toss it in there, because it would be silly to code the ability to "switch" skeletons on a mesh without any need for it. But that's not my point; my point is that, for optimization, I think it makes more sense for it to be in a separate manager (like my MeshManager) so that the HW instancing buffers can be constructed more efficiently instead of uselessly skipping over the skeleton data and nuking the cache.

Even more complex than that: what about the animation data? I'm not sure I'm settled on it, but the solution I'm going for right now is to keep the skeleton data with the mesh and have a separate AnimationManager. While technically this is worse for the HW instancing, it helps the cache in the other areas where I need to access the skeleton for the skinning code right alongside the mesh.
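So the layout I'm leaning toward looks roughly like this (just a sketch, not final; the buffer and Skeleton/Animation types are placeholders):

//Skeleton lives with the mesh, since skinning touches both together.
struct Mesh
{
  VertexBuffer mVertices;
  IndexBuffer  mIndices;
  Skeleton    *mSkeleton; //NULL for static meshes
};

//Animations are managed independently of any one mesh.
class AnimationManager
{
public:
  Animation &GetByID(unsigned iID);
  //...

private:
  std::vector<Animation> mAnimations;
};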

I should really do some profiling and try multiple approaches, but first I should get it working one way and then determine whether it's even a bottleneck.

So what's more important? 1) Cache, 2) memory overhead of another manager, 3) keeping things together that should conceptually be together? I know there's no true answer other than, "It depends." And that's the kicker.

First Milestone

Background
This is my final year at DigiPen in RTIS (the programmer track). I was really stoked for this year, as I was getting to make a two-man game project with one of my favorite developers in my year, Ramon Zarazua. Not many students wanted to make a Senior game; they just wanted to internship-out their remaining GAM credits, so I was very happy to have a team.

I was happy to have a nifty game project and a good workflow with my fellow dev. Unfortunately, as the cards would have it, Ramon hit personal issues and had to leave the team, leaving my two-man team to be just me.

Fortunately enough, however, I had already begun getting extremely excited by the content pipeline portion of our game, and had been planning out some grandiose things I would like to do, but would never be able to do since I also had a game to make.

Ramon leaving allowed me to switch gears, smoothly, into this new project. My asset management independent study is the result of this.

Content Pipeline
So, this is one of those things that there's no one definition for and, honestly, I'm not 100% sure myself. The purpose of this project is to explore and discover that. However, I've got a decent idea of what I think a content pipeline should be and should do.

  • Material editing - An artist or designer should be able to quickly see what an asset looks like in the context of the game. No two lighting models look the same, and being able to modify shader attributes is a must for rapid iteration.
  • Animation blend editing - Blend systems tend to be per-title (from what I understand), so being able to specify it within the context of the engine is a must. There's no good way of viewing this otherwise.
  • Asset description - Creating / updating an asset and linking what model, animations, and material properties belong to it.
  • Orphaned assets - Old assets that have since been replaced by different textures take up precious space. They should be removed without human intervention.
  • Rebaking - Your model conversion system changed. Now all your assets are in an old, invalid format. You should be able to "rebake" all of these assets with one click. This also means a "development assets" tree needs to be kept as well as a "baked assets" tree.
  • Version control - Now that all your changes are made, they should be submitted to the build-server so everyone can see the modifications. Also, if this is done automatically, within the tool, the intricate details of your chosen Version Control System (VCS) don't need to be described to all of your tools' users.
First Milestone
It was strange losing some dev time to re-planning what I was going to do this semester and pitching it to my teachers and the Dean's Office, but it was well worth it. This last Friday I had my first milestone presentation with Chris Peters, one week after I switched my plan.

It was by no means the best presentation I've given: it was pretty gut-wrenching seeing my VCS wrapper do silly things and my drag-and-drop interface crash on things I was demonstrating, but overall I think it was a decent result after only spending a week on a new project.

I've got to say, it was difficult taking my old graphics engine code, which assumed that all assets were statically loaded on startup, and squishing dynamic asset changing into it. That's part of what was causing it to crash: some of the DirectX mesh data wasn't properly being released and reacquired in the switch-over to the new asset.

Though I found a bunch of issues, and my product wasn't as strong as it could have been, I'm so very excited about this project I can't wait to fix these things and get on to user testing!

Now off to go fix some new Lost Device issues I've found... /codecodecode