Thursday, January 29, 2009

Animation and Bounding Spheres

X-Plane uses transform animation - that is, 3-d models are animated by applying rotations and translations to parts of their geometry.  This makes calculating the bounding sphere tricky.

In attempting to find the smallest static bounding sphere that accounted for all animation, I first tried transforming the bounding sphere by the animation.  Translations would move the sphere half-way to the extrema of the translation and increase the radius by half the translation distance. Rotations around the origin would move the sphere to the origin and increase the sphere size by its original distance from the origin.  (This is like convolving the sphere around a sphere.)

This didn't work very well - consider a rotating beacon on top of a tall pole.  This object has only one animation - a rotation around the Y axis.  If we convolve around a sphere, we end up sweeping the beacon around an arm as long as the pole is tall, when in fact it only rotates around the Y axis, which it sits quite close to.

So take two was to take the axis into account, convolving the sphere correctly around the axis. This helped a lot, but for some models the sphere was still really huge.

Then a modeler pointed the problem out to me: some authors will rotate an object a tiny bit around a very long arm to create a not-quite-linear translation.  Convolving around 360 degrees for this tiny rotation means picking up a ton of area that would never be in the object.

So take three is to explicitly convolve the rotated geometry along the real rotated arc at a few degrees spacing, with the spacing decided by whether we need to be faster or more accurate. This seems to work pretty well.
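As a rough sketch of the arc-sampling approach (the types and function names here are my own toy versions, not X-Plane's code), we can sample the child's bounding-sphere center along the authored rotation range only, and wrap one sphere around the samples:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
struct Sphere { Vec3 c; double r; };

static const double kDegToRad = 3.14159265358979323846 / 180.0;

// Rotate p around the Y axis by 'deg' degrees.
static Vec3 rotate_y(const Vec3& p, double deg) {
    double c = std::cos(deg * kDegToRad), s = std::sin(deg * kDegToRad);
    return { c * p.x + s * p.z, p.y, -s * p.x + c * p.z };
}

static double dist(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) + (a.z-b.z)*(a.z-b.z));
}

// Sweep a child bounding sphere through the authored rotation range only,
// sampling every step_deg degrees, and return one sphere covering all samples.
static Sphere sweep_rotation_y(const Sphere& child, double min_deg, double max_deg,
                               double step_deg) {
    std::vector<Vec3> centers;
    for (double a = min_deg; a < max_deg; a += step_deg)
        centers.push_back(rotate_y(child.c, a));
    centers.push_back(rotate_y(child.c, max_deg));   // always sample the arc's end
    // Center the result on the sampled centers' bounding box, then cover them.
    Vec3 lo = centers[0], hi = centers[0];
    for (const Vec3& p : centers) {
        lo.x = std::min(lo.x, p.x); hi.x = std::max(hi.x, p.x);
        lo.y = std::min(lo.y, p.y); hi.y = std::max(hi.y, p.y);
        lo.z = std::min(lo.z, p.z); hi.z = std::max(hi.z, p.z);
    }
    Sphere out = { { (lo.x+hi.x)*0.5, (lo.y+hi.y)*0.5, (lo.z+hi.z)*0.5 }, 0.0 };
    for (const Vec3& p : centers)
        out.r = std::max(out.r, dist(out.c, p));
    out.r += child.r;   // the swept geometry extends child.r beyond each center
    return out;
}
```

Because only the real arc is sampled, the beacon-on-a-pole case stays small, and a tiny rotation on a long arm only picks up the short chord it actually travels.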

The most accurate thing I can imagine would be to actually convolve all of the points to create a finalized point cloud and then take the bounding sphere - we get false area by convolving a larger volume than we need to.  The problem with the point cloud approach is that the number of points increases exponentially as we nest animations - to truly get it right, assuming 36 convolve steps, each point is multiplied up to 36 times for each rotation.

Assuming we could afford that cost, to be even more correct, we would want to convolve taking into account aligned animation variables - that is, if two animations happen in lock-step, we should not multiply out all possibilities because they cannot animate separately.

But at some point, the bounding sphere is a heuristic, and in practice the three refinements above make a huge improvement and don't leave a lot of slop on the table.

Saturday, January 24, 2009

pow(0,0) = NaN ... Sometimes

Thanks to the folks on the Mac OpenGL lists who helped me figure this out.
  • pow(0,0) is undefined in the GLSL spec.
  • On ATI hardware, it seems to return 0.
  • On NV hardware, it seems to return 0 if the inputs are known at compile time, but NaN if they are only available at runtime.
Now it's my fault for ever sending pow(0,0) to GLSL in the first place!  But what threw me was not just that the artifacts (propagating a NaN through your fragment shader generally results in black) were NV-specific, but that when I put in test code to identify the problem, they went away.  This was because the test code was constant, so the compile-time pow behavior kicked in.

Randomized Bounding Spheres

X-Plane uses bounding spheres to cull our meshes.  So having the smallest possible bounding spheres matters - the quality of the bounding sphere calculator affects all culling, which affects all drawing.

But calculating a minimum bounding sphere for a point cloud is not a fast operation - it's worse than linear time (which matters for huge meshes that are loaded while the simulator is running) and algorithms usually require robust mathematical operators.

Instead I use a bit of a heuristic - an incremental grow algorithm:
  • Initialize the sphere to radius 0, centered on the first point.
  • For each remaining point:
    • If the point is inside the bounding sphere, ignore it.
    • Otherwise, grow the bounding sphere to just barely include the point.
This algorithm is pretty good but not optimal.  The reason that it's not optimal is that when we grow to encompass a new point P, the opposite point that is "growing" (and not moving) the sphere is the farthest point on the sphere from P.  This farthest point may not actually be an input point at all - it could just be an artifact of the fact that we are using the sphere instead of the original data to grow.

(A slower but more comprehensive algorithm would have us go back over the original point data to find the far-side point limits, giving us O(N^2) - plus we'd start to have floating point robustness issues.)
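The incremental grow steps above can be sketched as follows (with toy types of my own, not X-Plane's actual code).  The key invariant is that each grow step leaves the old sphere internally tangent to the new one, so previously covered points stay covered:

```cpp
#include <cassert>
#include <cmath>

struct Pt { double x, y, z; };
struct Sphere { Pt c; double r; };

static double dist(const Pt& a, const Pt& b) {
    return std::sqrt((a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) + (a.z-b.z)*(a.z-b.z));
}

// Incremental grow: seed the sphere at the first point, then expand it just
// enough to include each point found outside it.  The new sphere is tangent
// to the old one on the far side, so everything already covered stays covered.
static Sphere grow_sphere(const Pt* pts, int n) {
    Sphere s = { pts[0], 0.0 };
    for (int i = 1; i < n; ++i) {
        double d = dist(s.c, pts[i]);
        if (d <= s.r) continue;              // inside - ignore it
        double new_r = (s.r + d) * 0.5;      // just barely include the point
        double t = (new_r - s.r) / d;        // slide the center toward it
        s.c.x += (pts[i].x - s.c.x) * t;
        s.c.y += (pts[i].y - s.c.y) * t;
        s.c.z += (pts[i].z - s.c.z) * t;
        s.r = new_r;
    }
    return s;
}
```

Note that the "opposite point" doing the growing is the far side of the current sphere, not an input point - which is exactly the source of slop described above.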

The quality of the bounding sphere has a lot to do with the order in which we grow it - the earliest points establish the size of the sphere, effectively inducing "phantom" points around the temporary (smaller) sphere.

So my first idea was to find the longest axis between any two points in my cloud and insert them first, in the hope that by establishing the long axis we could avoid growing the sphere in false directions.  This option probably means fewer grow operations (since more points will be inside the long axis) but the cost of finding the longest axis is O(N^2) so it's not really a speed win.

What surprised me is that when I compared the longest-axis order against the native submit order coming out of the host program, the longest axis was better in some cases but worse in others.  The "natural" order of the points seemed to produce pretty good results a lot of the time too.

What I then tried was a randomized approach:
  • For each of N trials:
    • Shuffle the data.
    • Calculate the bounding sphere.
    • If the sphere is smaller than our previous best (or this is the first trial), save it as our best.
Now the quality of these bounding spheres is random (and some are quite poor), but as we increase N, we are more and more likely to randomly find an order that is superior to any pre-determined heuristic.  The shuffle takes linear time, and the number of trials is constant (and quite possibly a lot smaller than the number of points).

For N = 256, this produces better results than either the natural or long-axis-based sphere perhaps 90% of the time.  And when it doesn't produce the best sphere, it is very close to optimal, usually within a few percent.
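A minimal sketch of the randomized driver (self-contained toy code - the grow pass is the same incremental scheme described earlier, and the fixed seed is just for repeatability):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

struct P3 { double x, y, z; };
struct Ball { P3 c; double r; };

static double d3(const P3& a, const P3& b) {
    return std::sqrt((a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) + (a.z-b.z)*(a.z-b.z));
}

// One incremental grow pass in the points' current order (order matters!).
static Ball grow(const std::vector<P3>& pts) {
    Ball s = { pts[0], 0.0 };
    for (size_t i = 1; i < pts.size(); ++i) {
        double d = d3(s.c, pts[i]);
        if (d <= s.r) continue;
        double nr = (s.r + d) * 0.5;
        double t = (nr - s.r) / d;
        s.c.x += (pts[i].x - s.c.x) * t;
        s.c.y += (pts[i].y - s.c.y) * t;
        s.c.z += (pts[i].z - s.c.z) * t;
        s.r = nr;
    }
    return s;
}

// Run 'trials' shuffled grow passes and keep the smallest sphere found.
static Ball best_of_n(std::vector<P3> pts, int trials) {
    std::mt19937 rng(1234);          // fixed seed keeps builds repeatable
    Ball best = grow(pts);           // trial 1: the natural order
    for (int i = 1; i < trials; ++i) {
        std::shuffle(pts.begin(), pts.end(), rng);
        Ball b = grow(pts);
        if (b.r < best.r) best = b;
    }
    return best;
}
```

Since the best result can never be worse than the natural-order trial, the randomization is pure upside apart from the extra CPU time.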

Wednesday, January 07, 2009

Webpages on the iPhone

I found info here on how to do some basic sanitization of a website for the iPhone... you can put this into your header section:
<meta name="viewport" id="iphone-viewport" content="width=480,maximum-scale=0.66667"/>
For sites that use PHP, I use this:
function is_phone() {
    // Case-insensitive user-agent sniff; returns a proper boolean.
    return stristr($_SERVER['HTTP_USER_AGENT'], 'iPhone') !== false;
}
This works on the iPod touch too.

Tuesday, January 06, 2009

Minkowski Sums and Buffering

As usual, a concept which I struggled with for years is implemented cleanly and elegantly in CGAL.  I've spent more hours than I can think of fighting with buffering algorithms - the one I finally came up with is very similar in approach to the GEOS buffering algorithm.

Buffering is the process of making a polygon bigger or smaller - a trivial operation unless something ugly happens, like the polygon bumping into its new bloated self.

GEOS solves the problem the same way CGAL does: using the winding rule.  Basically, as long as your offset polygon segments always turn in the same direction as the original, you can count the number of edges you cross - each time you go from the right side of the edge to the left, you increase the winding count (for counter-clockwise polygons) - and vice versa.  If the winding count is positive, you're inside.

Why this works, I don't know - I couldn't prove it to you.  But it does seem to work, and it seems to produce robust results even with fairly smashed up offset polygons, which is important to me, because often my offsets are larger than the polygons themselves.

If I had to prove it, I'd probably try to demonstrate that certain overlapping cases of similarly wound polygons form "union" operations, thus the results are always adding or always subtracting (depending on which kind of buffer you use).

So could I have used CGAL?  Actually no -- I needed something that's not easy to find in the GIS world: a buffer that varies its width per segment.  I buffer polygons to remove the road's width from the area I use to place buildings - sometimes the road width changes mid-polygon!

Monday, January 05, 2009

Polygon Offset and Shadow Mapping

A little bit of polygon offset goes a long way in fixing shadow mapping "acne". In particular, polygon offset when creating the shadow map works better than simply adding a constant factor when doing the shadow compare. Why? Well, this paper explains it.

Basically, the errors you get in shadow maps that cause "acne" (that is, surfaces shadowing themselves, but only sometimes) come from two sources:
  • Numeric (e.g. you ran out of bits).
  • Geometric.
Now the numeric case is easy to understand: your depth compare is only as good as the "Z" precision of your shadow map - that is, 1 in 2^24 for a 24-bit integer shadow map. We need to add at least 1/2^24 to the shadow map to assure that objects don't shadow themselves.

But you'll find that that's not nearly enough offset. Why? Well, the geometric error is the answer.

When you do a shadow map compare, the depth you read comes from the center of a texel in the shadow map. But because you are projecting the texture, the sample you want isn't at that texel center. Since the shadow map defines a surface perpendicular to the sun, the farther the point you wanted is from the texel center, the more bias you need.

(Note that this means that the lower the horizontal/vertical resolution of your shadow map, the larger this depth-bias problem becomes. Also note that as the Z-buffer distance of your occluder increases - that is, as you shrink the near and far clip planes of the sun to "hug" the occluder, the depth slope per pixel increases and this problem becomes worse too! This is counter-intuitive - you would not expect to need a larger constant bias due to a more precise shadow map.)

And that's why polygon offset is useful: the bias you need goes up as the occluder starts to slope away from the sun. That is the case where (when self-shadowing) the occluder slopes away from the sun in your real model, but the shadow map will have a "perpendicular to sun" shadow over the span of a single pixel. The more sloped, the more bias - perfect for polygon offset.

The big win of polygon offset over a constant bias is that it lets you reduce the amount of bias when the occluder is perpendicular to the sun. This is the case where the geometric error is almost none, and a very small constant bias will be adequate.
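To make the two error terms concrete, here is an illustrative calculation (the names and the one-texel slope term are my own conservative choices, not X-Plane's actual constants); the split mirrors the factor/units pair that glPolygonOffset takes:

```cpp
#include <cassert>

// Numeric floor: one ulp of a 24-bit integer depth buffer.
static double numeric_bias() {
    return 1.0 / double(1 << 24);
}

// Total bias: the numeric floor plus a slope-dependent geometric term.
// depth_slope is the occluder's depth change per shadow-map texel; it grows
// as the surface tilts away from the sun and as the map's resolution drops.
static double shadow_bias(double depth_slope) {
    return numeric_bias() + depth_slope;
}
```

The point of the sketch is only the shape of the formula: a tiny constant term for the perpendicular case, and a term proportional to depth slope for everything else.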

As a final note, a number of web pages and discussion forums refer to the "non-linear" nature of the Z-buffer. For integer Z-buffers, this is only true when you use frustum projection! The non-linearity comes from the matrix transform, not the Z-buffer itself; what you're seeing is Z precision getting smaller for far-away objects, just like pixel size gets smaller for far-away objects.

What this means is that while you're very likely to have problems with shadow maps based on the depth buffer (that is, post-perspective-divide Z) when the shadow map uses a frustum, it is not an issue at all for orthographic projection. In other words, there shouldn't be a problem with the sun, but there will be with spot lights.