Wednesday, November 24, 2010

Dual Quaternions... All the Way... Across the Vectorspace

I can't believe I'm saying this, but complex numbers actually kinda make sense. Dual numbers and split-complex numbers confuse me.

However, I'm trying to get a handle on them because there seems to be a small new wave of skinning techniques coming from "dual quaternions": quaternions composed of 4 "dual numbers" instead of 4 Real numbers.

DISCLAIMER: I am by no means a pedantic mathematician, so I'm probably going to say something really wrong by the measure of what a mathematician would deem appropriate. I'm approaching this purely from an application and implementation standpoint.

So, back to the basic definition of a complex number: i is the "imaginary unit" such that i*i = -1. A coordinate in the complex plane is given by a Real coefficient and an imaginary one (i.e. 1 and i are basis vectors). This gives a + bi as a complex number.

Dual numbers and split-complex numbers are almost the same... except e, the "dual unit", is defined by e*e = 0, and j, the "split-complex unit", is defined by j*j = 1. Wait, what? But that should give e = sqrt(0) and j = sqrt(1). Aren't those just 0 and 1, respectively?

Well it turns out that if you take e to be the matrix [[0, 1], [0, 0]], you can get some useful properties out of it (covered here). It's a nonzero matrix whose square is still the 0-matrix, which lets you have an "alternate complex plane" of two coefficients a and b such that a + be is a dual number.
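
To make that concrete, here's a quick sketch of dual-number arithmetic (my own illustration; nothing library-specific). Distribute the product and the e*e term simply vanishes, giving (a + be)(c + de) = ac + (ad + bc)e:

// A dual number a + be, where e*e = 0.
struct Dual { float a, b; };

Dual Add(Dual p, Dual q) { return { p.a + q.a, p.b + q.b }; }

// (a + be)(c + de) = ac + (ad + bc)e + (bd)(e*e), and the e*e term vanishes.
Dual Mul(Dual p, Dual q) { return { p.a * q.a, p.a * q.b + p.b * q.a }; }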

Using dual numbers as components of quaternions instead of Real numbers allows you to introduce a translation component directly into the quaternion. How this alteration affects quaternion multiplication is demonstrated here. As far as I can tell, this type of transformation is called a "rigid transformation", presumably because rotation plus translation preserves distances and angles (no scale or shear). If you need to research things more, there's a keyword for you.

Since dual numbers distribute like any other number, a quaternion defined as [v, s] such that v = [x + ae, y + be, z + ce] and s = [w + de] can be described by breaking it into two regular quaternions, one scaled by the dual unit: Q0 + eQ1 such that Q0 = [x, y, z, w] and Q1 = [a, b, c, d].
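
Here's a minimal sketch of that split form and how it multiplies (hypothetical types of my own; any quaternion class would do). Because e*e = 0, the product of two dual quaternions works out to (A0 + eA1)(B0 + eB1) = A0B0 + e(A0B1 + A1B0):

// Quaternion stored as [x, y, z, w] with w as the scalar part.
struct Quat { float x, y, z, w; };

// Standard Hamilton product.
Quat Mul(Quat a, Quat b)
{
    return { a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w,
             a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z };
}

Quat Add(Quat a, Quat b) { return { a.x + b.x, a.y + b.y, a.z + b.z, a.w + b.w }; }

// Q0 + eQ1: the "real" part q0 and the "dual" part q1.
struct DualQuat { Quat q0, q1; };

// Because e*e = 0, the e*e term of the product vanishes:
// (A0 + eA1)(B0 + eB1) = A0B0 + e(A0B1 + A1B0)
DualQuat Mul(DualQuat a, DualQuat b)
{
    return { Mul(a.q0, b.q0),
             Add(Mul(a.q0, b.q1), Mul(a.q1, b.q0)) };
}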

Hey, that's neat... just like the imaginary unit, we can pretty much ignore the dual unit and not attempt to represent it until we need to get meaningful info out of the quaternion!

So, how is this pertinent for skinning? Instead of needing an encompassing data structure like a VQS transformation (V = vector = position, Q = quaternion = rotation, and S = scale), translation can be incorporated directly into the concatenation of quaternions, much like linear transformations in matrices.
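
For concreteness, here's the standard construction for packing a rotation and a translation into one dual quaternion, reusing the Quat/DualQuat sketch from above. The dual part is half the translation (as a pure quaternion) times the rotation, so concatenating two of these with the multiply above composes both rotation and translation in one go:

// Pack a unit rotation quaternion q and a translation (tx, ty, tz) into
// Q0 + eQ1, with Q1 = 0.5 * t * q, where t is the pure quaternion [tx, ty, tz, 0].
DualQuat MakeRigid(Quat q, float tx, float ty, float tz)
{
    Quat t = { tx, ty, tz, 0.0f };
    Quat tq = Mul(t, q);
    return { q, { 0.5f * tq.x, 0.5f * tq.y, 0.5f * tq.z, 0.5f * tq.w } };
}

// To recover the translation later: t = 2 * Q1 * conjugate(Q0).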

There are various papers that demonstrate the usefulness of this [Kavan et al. 2006]. In many instances, it cuts down on the number of operations you have to do since the translation is right there. It also naturally handles some pinching edge cases, just like quaternion double-cover does for blending.

It seems to have many benefits so far, apart from some added mathematical complexity. And since when has mathematical complexity stopped game programmers from shaving off a few instructions? But there's a catch (isn't there always?): dual quaternions can't be linearly interpolated in a clean manner. Kavan et al. maintain that the only effective way to interpolate between dual quaternions is an extension of Slerp called "Screw Linear Interpolation", or "Sclerp". There does exist a way to linearly interpolate, but it requires lots of silly finagling that is more costly than Sclerp-ing itself. Directly interpolating the components is ineffective and can result in singularities. That's not cool.

That's a problem. Sclerp being the only effective means of interpolation means that:
  1. Animation blending is order-dependent, e.g. Sclerp(Sclerp(Q0, Q1), Q2) != Sclerp(Q0, Sclerp(Q1, Q2)).
  2. The only method of blending is VERY expensive. Just like Slerp, Sclerp involves a linear combination of sin() and cos() terms. If you've already approximated your animation to the frame level with a fitted curve, you don't need hyper-correct, constant-velocity interpolation between each keyframe. It's unnecessary complexity.
So, while they're kinda nifty... I'm not so sure that they'll lead to anything for game developers until a better interpolation method is found. I definitely want to do more research to see if there is a better way to interpolate, but so far I just don't see it. Until then, VQS it is!

EDIT (11/25/10): There's a light at the end of the tunnel! I learned from Andy Firth and Christian Diefenbach of Bungie that you can effectively avoid the singularities of linearly blending dual quaternions. I'll post my findings once I have the fuller proofs of how that's possible. This makes me happy; dual quaternions seem like a very useful solution to skinning with animation blending, and it'd be a shame not to be able to use them in real-time applications.
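
In the meantime, here's a sketch of the linear-blend idea as I currently understand it, reusing the Quat/DualQuat bits from above; treat it as an assumption until I post the fuller proofs. The trick is a weighted sum with a hemisphere sign flip, followed by a renormalize:

#include <cmath>

float Dot(Quat a, Quat b) { return a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w; }

Quat Scale(Quat q, float s) { return { q.x*s, q.y*s, q.z*s, q.w*s }; }

// Weighted linear blend of two unit dual quaternions.
DualQuat Blend(DualQuat p, DualQuat q, float wp, float wq)
{
    // Flip q if its real part is on the opposite hemisphere from p's
    // (quaternion double-cover); this is what avoids the singularity.
    float s = (Dot(p.q0, q.q0) < 0.0f) ? -wq : wq;

    DualQuat r = { Add(Scale(p.q0, wp), Scale(q.q0, s)),
                   Add(Scale(p.q1, wp), Scale(q.q1, s)) };

    // Renormalize by the real part's length; dividing the dual part by the
    // same amount keeps the pair consistent.
    float invLen = 1.0f / std::sqrt(Dot(r.q0, r.q0));
    r.q0 = Scale(r.q0, invLen);
    r.q1 = Scale(r.q1, invLen);
    return r;
}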

Wednesday, November 17, 2010

Bezier Spline Demo Videos

I put together some quick videos demonstrating some of my spline and motion-along-a-path work I discussed in this post.

I'm intending this to be used as a tool to allow artists to test their animation blending while the character is in motion.

NOTE: the character appearing in many of my demonstrations of this tool is Tad from an unrelated DigiPen game called Tad Studbody and the Robot Rampage (download here).

The green sphere at Tad's feet is the Center of Interest that controls his orientation along the path. It's simply the average of points 1, 2, and 3 meters ahead (Tad's units make him much larger than a normal person).
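
A quick sketch of that calculation (SamplePathAtDistance is a hypothetical helper returning the point at a given arc length along the path, and Vec3 stands in for any vector type with + and scalar *):

Vec3 SamplePathAtDistance(float s);   // hypothetical: point at arc length s along the path

// Average the points 1, 2, and 3 meters ahead to get a smoothed look-at target.
Vec3 CenterOfInterest(float distAlongPath)
{
    Vec3 a = SamplePathAtDistance(distAlongPath + 1.0f);
    Vec3 b = SamplePathAtDistance(distAlongPath + 2.0f);
    Vec3 c = SamplePathAtDistance(distAlongPath + 3.0f);
    return (a + b + c) * (1.0f / 3.0f);
}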

You can see the shells (the blue lines) where I generate extra points to make the piecewise Bezier spline interpolate the input points. The smoothing factor is also demonstrated. A smoothing factor of 0 causes the path to become linear.

Bezier Spline Interface from Zak Whaley on Vimeo.

This is a path generated from the originally described algorithm where I fudged the first and last tangents by laying a chord between the first two points and the last two points.

Open Path Demo from Zak Whaley on Vimeo.

This is a closed path using a slightly modified version of the algorithm, where the first and last tangents are parallel to the chord stretched between the second input point and the last input point.

Closed Path Demo from Zak Whaley on Vimeo.

Thursday, November 11, 2010

Milestone Update: Let There Be Post Processing

Cool! So I have a lot of updates from just a couple of days ago...

Light Pre-Pass
All the lighting issues I was having before are resolved! That weird thing with the normals was due to recalculating the normal incorrectly.

So, what I was doing was saying Z was -(1 - abs(X) - abs(Y)), since I know that the magnitude of the normal should be 1. Given that all normals should be facing the camera and that DirectX is a left-handed system, you may safely assume that all Z components are negative. This is almost correct, except I was being silly and needed to use the Pythagorean theorem. Therefore, Z should be -sqrt(1 - X*X - Y*Y).

Further, I was packing the X and Y of the normal incorrectly in the GBuffer: if they were negative, they would be clamped to 0. I finally realized I needed to pack them as (X+1)/2 and (Y+1)/2 and unpack them as X*2 - 1 and Y*2 - 1. This finally let me recalculate Z reliably and freed up the Z channel of the GBuffer to be paired with the W channel. That means I could pack the depth component across two channels, allowing me 16-bit depth in my lighting calculation!

//Normal Z recalculation:

//X and Y are stored as [0, 1]; convert to [-1, 1]. (Z and W hold packed
//depth, so only .xy of the sample are meaningful for the normal.)
float3 lNormal = 2*lGBufferSample.xyz - float3(1,1,1);

//Z is guaranteed to be negative; choose it to make the magnitude of the
//normal 1. saturate() guards against a tiny negative under the sqrt
//caused by 8-bit precision loss.
lNormal.z = -sqrt(saturate(1.f - lNormal.x*lNormal.x - lNormal.y*lNormal.y));
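
And here's a CPU-side sketch of the 2-channel depth packing idea (my illustration of the equivalent math, not the actual shader code): treat the depth as 16-bit fixed point and split it across two 8-bit channels:

// Pack a [0, 1] depth as 16-bit fixed point split across two 8-bit channels.
void PackDepth16(float depth, unsigned char& hi, unsigned char& lo)
{
    unsigned v = (unsigned)(depth * 65535.0f + 0.5f);  // assumes depth in [0, 1]
    hi = (unsigned char)(v >> 8);     // high byte -> one channel (e.g. Z)
    lo = (unsigned char)(v & 0xFF);   // low byte  -> the other  (e.g. W)
}

float UnpackDepth16(unsigned char hi, unsigned char lo)
{
    return (float)((hi << 8) | lo) / 65535.0f;
}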

NOTE: I'm using ARGB8 for all my render targets for maximum compatibility with target machines.

The GBuffer where only XY of the normal are stored, and ZW contains the packed depth data.

Radius-Based Attenuation
I really wanted to be able to have radius-based point lights; however, given the standard attenuation model of 1/(C + L*d + Q*d*d), where C, L, and Q are the constant, linear, and quadratic coefficients, being able to reasonably light a scene would require light hulls with massive radii. This completely defeats the purpose of light hulls: to reduce the number of pixels being lit, since lighting is a fillrate-intensive operation.

I realized what I really wanted was to be able to manually describe what the falloff looked like. What I did in the interim is use an Ease Function: a function that expects normalized input and gives normalized output. It's used to "ease" something "in" and "out" of a state (1 and 0, respectively). I defined my Ease Function as a simple quadratic and feed it the distance of a given fragment, normalized with respect to the light hull's radius. The output is my attenuation. As simple as that, and with no more computation than the typical attenuation model.
Exaggerated quadratic falloff of the Ease Function.
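
In code, the interim version is about as small as it sounds. A quick sketch (the exact polynomial here is my guess at the shape; the point is normalized distance in, attenuation out):

// Simple quadratic ease-out: 1 at the light's center, 0 at the hull's edge.
float EaseQuadratic(float normDist)   // normDist = distance / hullRadius, in [0, 1]
{
    float u = 1.0f - normDist;
    return u * u;
}
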
This opens up a whole new world of lighting models for me. I'm going to give my tool a control that defines the Ease Function as a quadratic Bezier curve. This would allow an artist or designer to describe exactly the falloff look they want.

Here's an example of what I plan for the control:
The distance is normalized and passed in on the "X" axis, and the result is an attenuation on the "Y" axis.
If the user really does want a more realistic attenuation model, a standard 1/r^2 falloff can even be approximated by pulling the control point into a 1/r^2-looking curve. Splines are super quick to calculate, as much of the math can be precomputed beforehand. It'll just boil down to a few multiplications and some additions when I'm done. The only difficult part will be how I turn a distance into a t-value, as Bezier splines are defined in parametric form. I'll probably have to make a few assumptions and shortcuts about negatives, but I think it'll yield very intuitive and useful results for content developers.
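
Here's a sketch of how I imagine evaluating that control (hypothetical code; I'm pinning the endpoints at (0, 1) and (1, 0) so the curve always starts at full brightness and hits zero at the hull's edge). For a quadratic, the distance-to-t problem actually has a closed form, since x(t) = (1 - 2*cx)*t^2 + 2*cx*t is just a quadratic to solve for t:

#include <cmath>

// Quadratic Bezier falloff: P0 = (0, 1) and P2 = (1, 0) are fixed; the artist
// drags the middle control point (cx, cy), with cx kept in [0, 1] so x(t)
// stays monotonic.
float BezierAttenuation(float dist, float hullRadius, float cx, float cy)
{
    // Normalize distance into [0, 1] against the hull radius (the "X" axis).
    float x = dist / hullRadius;
    if (x < 0.0f) x = 0.0f;
    if (x > 1.0f) x = 1.0f;

    // Invert x(t) = (1 - 2cx)t^2 + 2cx*t for t.
    float a = 1.0f - 2.0f * cx;
    float t = (std::fabs(a) < 1e-5f)
        ? x                                               // cx == 0.5: x(t) = t
        : (-cx + std::sqrt(cx * cx + a * x)) / a;

    // Evaluate the "Y" axis (attenuation) at t; the endpoints contribute 1 and 0.
    float u = 1.0f - t;
    return u * u + 2.0f * t * u * cy;
}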

Further, even if splines become too heavy, on saving out the light info, the curve could easily be baked to an Ease Function. Most lights won't change their attenuation, but for those that do, the overhead of dynamic attenuation will likely be acceptable.

Material Editing
Now that I've completed my post processing framework, I can finally start on my material editor. This will allow content developers to specify normal maps, specular maps, BRDF maps and the coefficients that control them.

Here's a quick run-through of my (limited) stages. The model in these images is the Alchemist from the Torchlight series; the Torchlight assets were published for modders by Runic Games. Thanks to the roughly 210 MB of assets, I've found many edge cases in my model converter, which makes my tool much more robust and as useful as possible to the artists and designers who would use it.
The GBuffer with 2-channel packed depth. The Alchemist didn't have a normal map, so I'm just applying a cool junk normal map to him.

Lighting data constructed from the GBuffer.

Albedo combined with the light information.

A simple Bloom Lighting effect applied.

After AntTweakBar and debug UI are applied.

Tuesday, November 9, 2010

Reticulating Splines

I now have piecewise Bezier splines that interpolate input points! I ran through many tutorials and equations to try and find a general form that produced these results, but nothing seemed to work.

What I finally ended up doing was based off of a heuristic I developed.

Initial Thought
My initial idea was to specify an "input tangent" (in the base case, the direction vector between the first two points) and an "output tangent" parallel to the chord stretched between the previous point and the next point (thus making the tangent at the current point parallel to this chord).

The final "output tangent" is simply defined as the vector from the second to last point and the last point. This takes care of all the necessary constraints to select a single spline from the infinite possibilities.

When these tangents have a positive dot product, you can construct an intersection point (making a triangle) using the previous and current points and the entrance and exit tangents. Then, to pick arbitrary piecewise Bezier control points, just scale along the triangle's sides by an arbitrary "smoothing factor". The larger the scalar, the closer the control points are to the tip of the triangle (the intersection point); the smaller the scalar, the closer the Bezier curve is to a line.

This runs into a problem when the dot product between the two tangents is negative: the intersection point causes cusps in the spline. To solve this, I originally special-cased these segments by scaling along the entrance and exit tangents by the smoothing factor, scaled by the distance between the points. I figured this heuristic made as much sense as any, even though it was arbitrary.

Realization
When I was describing my algorithm to Chris Peters (an instructor of mine), he noted that I could just do that simplistic heuristic for all points and avoid the parallel tangents case altogether.

I'm super excited, because my results look awesome and are easily tweaked with a single magic number (which works best at 0.3).
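
For reference, here's a minimal sketch of the final algorithm as described above (my own reconstruction with hypothetical types; it assumes at least two input points):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 operator*(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
float Length(Vec3 a) { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }
Vec3 Normalize(Vec3 a) { return a * (1.0f / Length(a)); }

// One cubic Bezier piece: endpoints p0/p1 and interior control points c0/c1.
struct BezierSegment { Vec3 p0, c0, c1, p1; };

std::vector<BezierSegment> BuildSpline(const std::vector<Vec3>& pts, float smoothing)
{
    const size_t n = pts.size();   // assumes n >= 2

    // Tangents: fudge the ends with the first/last chords; interior tangents
    // are parallel to the chord between the previous and next points.
    std::vector<Vec3> tan(n);
    tan[0] = Normalize(pts[1] - pts[0]);
    tan[n - 1] = Normalize(pts[n - 1] - pts[n - 2]);
    for (size_t i = 1; i + 1 < n; ++i)
        tan[i] = Normalize(pts[i + 1] - pts[i - 1]);

    // The single heuristic: push each control point along its tangent by
    // smoothing * (distance between the endpoints). 0.3 works best for me.
    std::vector<BezierSegment> segs;
    for (size_t i = 0; i + 1 < n; ++i)
    {
        float d = Length(pts[i + 1] - pts[i]);
        segs.push_back({ pts[i],
                         pts[i] + tan[i] * (smoothing * d),
                         pts[i + 1] - tan[i + 1] * (smoothing * d),
                         pts[i + 1] });
    }
    return segs;
}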

Interpolating Bezier spline with smoothing = 0.3

As the smoothing approaches 0, the interpolating spline becomes the line segments between the points.

As the smoothing approaches 1 (each tangent being the whole distance between the points), generally undesirable results occur, but I left this possibility in for the user to decide.

The spline from the first picture without the shells.

Monday, November 8, 2010

B-Splines

Sweet! I love it when things just work. My old B-Spline code just dropped right in, flawlessly. Now I can do pathed animations!

Sunday, November 7, 2010

G-Buffer and Light Buffer

Yay! I'm almost done with my Light Pre-Pass system. I still need to implement light hulls; right now I'm just doing CW culling and rendering every pixel out to the light's bounding sphere. I have a few issues with my Light Buffer, though...

I'm storing the normal's X and Y in the R and G channels, and I'm packing the surface's depth in viewspace across Z and W. However, when I go to reconstruct Z from the normal, something doesn't seem quite right.

Am I wrong in assuming that, given a left-handed coordinate system, Normal.z = abs(Normal.x) + abs(Normal.y) - 1? I'm sure I can safely assume that all stored normals are pointed at me, so that guarantees a negative z-value for my normal.

The results are that at certain view-dependent angles, my model appears a uniform gray. Fortunately everything seems good for the most part.

Here's what the image looks like when Tad is washed out.

Oh well, off to bed for now. I'll see what comes tomorrow.

<3 Lighting

Thursday, November 4, 2010

Light Pre-Pass

I'm right in the middle of implementing a light pre-pass post-processing system. I'm a decent way there; I've just got some nasty issues with the depth buffer. Can't wait to post some pretty pictures of it!

The animation system is almost complete in its basic implementation. Right now, you can drag-and-drop new meshes, textures, and animations to build a new asset. Next I need to implement a basic UI and asset reference counting.