Essential Math Weblog: Thoughts on math for games

3/11/2012

More Errata

Filed under: Erratical,Mathematical — Jim @ 5:02 pm

So Jacob Shields, a student at Ohio State, added a set of errata to the comments on another post. To address them, I’ve pulled them out one by one:

I came across this on page 117 of the Second Edition: “There is a single solution only if the rank of A is equal to the minimum of the number of rows or columns of A.”

If I’m interpreting this correctly, it follows that a 2×3 coefficient matrix with a rank of 2 has a single solution. In reality, it has 1 free variable and thus infinitely many solutions. Shouldn’t there be a single solution only if the rank of A is equal to the number of unknowns, i.e. columns?

Yeah, I was probably only thinking of square matrices when I wrote that. I’ll have to think of another way to put that. Saying that it’s equal to the number of columns doesn’t work if you’re solving xA = b.
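To restate the condition for the standard Ax = b case: there is a single solution only if the rank of A equals the number of unknowns (assuming the system is consistent at all). A concrete instance of Jacob’s 2×3 example:

    A = | 1 0 0 |    b = | 1 |
        | 0 1 0 |        | 2 |

Here rank(A) = 2, but there are 3 unknowns, and x = (1, 2, t) solves Ax = b for any t: infinitely many solutions.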

Also, on page 126 it says: “The first part of the determinant sum is u_[0,0]*U~_[0,0].” Below, it displays this equation: “det(U) = u_[0,0]*U~_[0,0]”

Assuming I’m interpreting this correctly, this must be wrong. The determinant of a real matrix should be a real number, not another matrix. In both cases I think it means to say u_[0,0]*det(U~_[0,0]) — right?

Yes, that’s correct, it’s a typo. The notation should follow from that used on page 123.
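For anyone implementing it, expansion by minors along the first row looks something like the following (a minimal recursive sketch using std::vector, not the book’s code):

    #include <vector>

    using Matrix = std::vector<std::vector<double>>;

    // det(U) = sum over j of (-1)^j * u_[0,j] * det(U~_[0,j]), where
    // U~_[0,j] is U with row 0 and column j removed.
    double Determinant(const Matrix& U)
    {
        const size_t n = U.size();
        if (n == 1)
            return U[0][0];

        double det = 0.0;
        double sign = 1.0;
        for (size_t j = 0; j < n; ++j)
        {
            // build the submatrix U~_[0,j]
            Matrix sub(n - 1, std::vector<double>(n - 1));
            for (size_t r = 1; r < n; ++r)
                for (size_t c = 0, cc = 0; c < n; ++c)
                    if (c != j)
                        sub[r - 1][cc++] = U[r][c];

            det += sign * U[0][j] * Determinant(sub);
            sign = -sign;
        }
        return det;
    }

(Expansion by minors grows factorially with n, so this is for exposition; in practice you’d compute the determinant from an LU factorization.)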

[…] on page 130 it says: “It turns out the columns of R are the eigenvectors of A, and the diagonal elements of D are the corresponding eigenvectors.” Shouldn’t it be that “the diagonal elements of D are the corresponding eigenvalues”?

Yup, another typo.
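In symbols: if A = R D R^(-1) (with R^(-1) = R^T when R is a rotation), then A R = R D, and reading that column by column gives A r_j = d_j r_j. So each column r_j of R is an eigenvector of A, and the corresponding diagonal entry d_j of D is its eigenvalue.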

Could you please explain what is meant by the first matrix multiplication shown on page 138 in Section 4.2.3, Formal Representation [of affine transformations]?

For example, where did the matrix come from? Its entries are vectors, and the dimension of the product is (m+1)x1. What is the significance of the product, and how does it apply to what was just said?

I understand the matrix multiplication below it. In this case, the dimension of the product is 1×1 and, when expanded out, is equal to T(P). On initial reading it seems like the first matrix multiplication and the second one are supposed to be equivalent (before the second it says, “We can pull out the frame terms to get…”), but they’re clearly different, so how did you go from one to the other?

I’m not sure what the issue is; it’s similar to the linear transformation discussion on pages 104-106. The first n columns of the first matrix are the T(v_j) terms and the last is the T(O_b) term. The product of the two matrices is the transformed result T(P). The last sentence on page 138 sums it all up.
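To unpack it a bit (a reconstruction, so the layout may not exactly match the page): any point can be written in the frame as P = O_b + x_0 v_0 + ... + x_{n-1} v_{n-1}, and an affine transformation preserves that combination, so

    T(P) = x_0 T(v_0) + ... + x_{n-1} T(v_{n-1}) + T(O_b)
         = [ T(v_0) ... T(v_{n-1}) T(O_b) ] [ x_0 ... x_{n-1} 1 ]^T

The first display is that block product with the transformed frame entries left as vectors; pulling the frame terms out of each entry is what turns it into the ordinary numeric matrix product below it.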

On page 180, the description in the first whole paragraph about rotating an object to demonstrate gimbal lock says to “rotate the object clockwise 90 degrees around an axis pointing forward … [then] rotate the new top of the object away from you by 90 degrees … [then] rotate the object counterclockwise 90 degrees around an axis pointing up …. The result is the same as pitching the object downward 90 degrees (see Figure 5.4).”

However, if I’m interpreting this correctly, the rotation described appears to be a fixed angle x-y-z (-pi/2,-pi/2,pi/2) rotation. The result of this is the same as pitching the object _upward_ 90 degrees. Furthermore, Figure 5.4 demonstrates a different rotation: a fixed angle x-y-z (pi/2,pi/2,pi/2) rotation.

If the x-axis is pointing away from you and you rotate clockwise (from your perspective) by 90 degrees, that’s the same as rotating around the x-axis by positive 90 degrees, using the right-hand rule. Similarly, rotating the top of the object away from you is rotating around the y-axis by positive 90 degrees. The result should be the same as the diagram.

And this is why math uses symbols rather than trying to explain it using natural languages…
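Writing it out symbolically (assuming a right-handed frame with x forward, z up, and column vectors; this is my reconstruction, not the book’s text), the three steps are +90 degrees about x, then y, then z:

    R_x(90) = | 1  0  0 |   R_y(90) = |  0  0  1 |   R_z(90) = | 0 -1  0 |
              | 0  0 -1 |             |  0  1  0 |             | 1  0  0 |
              | 0  1  0 |             | -1  0  0 |             | 0  0  1 |

    R_z(90) R_y(90) R_x(90) = |  0  0  1 |
                              |  0  1  0 |  =  R_y(90)
                              | -1  0  0 |

which sends the forward axis (1,0,0) to (0,0,-1), i.e. straight down: a 90 degree pitch downward, matching the figure.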

I might be missing something here, but on pages 181-182 it says this: “Near-parallel vectors may cause us some problems either because the dot product is near 0, or normalizing the cross product ends up dividing by a near-zero value.”

For near-parallel vectors, wouldn’t the dot product be near +/- 1, not 0?

I’m not sure what I meant there. Possibly that the dot product magnitude being near 1 (not 0, as you point out) can, through floating point error, lead to a result with magnitude slightly greater than 1, which is not valid input to acos.
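The usual guard is to clamp the dot product before handing it to acos. A minimal sketch (not the book’s code):

    #include <cmath>

    // Angle between two unit-length vectors, given their dot product.
    // Floating point error can push the dot product slightly outside
    // [-1, 1], and acos returns NaN for such input, so clamp first.
    float SafeAngle(float dot)
    {
        if (dot > 1.0f) dot = 1.0f;
        else if (dot < -1.0f) dot = -1.0f;
        return std::acos(dot);
    }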

On page 178, under the heading “5.3.3 Concatenation” it says this: “Applying (pi/2, pi/2, pi/2) twice doesn’t end up at the same orientation as (pi, pi, pi).”

However, for fixed angle z-y-x (the most-recently mentioned convention in the book prior to this section), applying (pi/2, pi/2, pi/2) twice does appear to end up at the same orientation as (pi, pi, pi). This isn’t the case for fixed angle x-y-z, though (and presumably other conventions). That had me confused for a while!

I believe that should be: Applying (pi/4,pi/4,pi/4) twice doesn’t end up at (pi/2,pi/2,pi/2). Your confusion is understandable. Though as a (very) weak defense, when I multiply it out on the computer, floating point error does give me slightly different results (some of the “zero” terms end up being very small f.p. values).
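If you want to check this numerically yourself, here’s a quick sketch using the fixed-angle x-y-z convention and column vectors (swap in whichever convention you prefer):

    #include <cmath>
    #include <cstdio>

    // Fixed-angle x-y-z rotation: rotate about the world x axis by a,
    // then world y by b, then world z by c, so R = R_z(c) R_y(b) R_x(a).
    void FixedAngleXYZ(double a, double b, double c, double R[3][3])
    {
        double ca = std::cos(a), sa = std::sin(a);
        double cb = std::cos(b), sb = std::sin(b);
        double cc = std::cos(c), sc = std::sin(c);
        R[0][0] = cb*cc; R[0][1] = sa*sb*cc - ca*sc; R[0][2] = ca*sb*cc + sa*sc;
        R[1][0] = cb*sc; R[1][1] = sa*sb*sc + ca*cc; R[1][2] = ca*sb*sc - sa*cc;
        R[2][0] = -sb;   R[2][1] = sa*cb;            R[2][2] = ca*cb;
    }

    int main()
    {
        const double kPi = 3.14159265358979323846;
        double half[3][3], twice[3][3], direct[3][3];

        FixedAngleXYZ(kPi/4.0, kPi/4.0, kPi/4.0, half);
        FixedAngleXYZ(kPi/2.0, kPi/2.0, kPi/2.0, direct);

        // applying (pi/4, pi/4, pi/4) twice: twice = half * half
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
            {
                twice[i][j] = 0.0;
                for (int k = 0; k < 3; ++k)
                    twice[i][j] += half[i][k] * half[k][j];
            }

        // compare against the direct (pi/2, pi/2, pi/2) rotation
        double maxDiff = 0.0;
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                maxDiff = std::fmax(maxDiff, std::fabs(twice[i][j] - direct[i][j]));

        std::printf("max difference = %g\n", maxDiff);  // roughly 0.78, nowhere near f.p. noise
        return 0;
    }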

On page 191, about halfway down the page, it mentions vectors w_0 and w_1, and then says that if we want to find the vector that lies halfway between them, we need to compute (w_1+w_2)/2. I’m assuming that it really means (w_0+w_1)/2?

As an aside, what is the significance of the division by 2? That is, using this method, what is the expected length of the resultant vector (without normalizing)? For instance, let w_0 = (1,0,0) and w_1 = (0,1,0). Then the halfway vector is (1+0,0+1,0+0)/2 = (0.5,0.5,0), whose length is 1/sqrt(2). Clearly this points in the correct direction, but its length is not the length of w_0, the length of w_1, or the average length of them, so what is it used for?

Correct, it should be w_0 and w_1. And the division by two is just to represent the average. As I point out, you don’t need to divide by 2 if you want a normalized result — you can skip the division and just normalize the sum. The intermediate length is never used.
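(To make the aside concrete: with w_0 = (1,0,0) and w_1 = (0,1,0), normalizing (1,1,0) and normalizing (0.5,0.5,0) both yield (1/sqrt(2), 1/sqrt(2), 0). Any positive scale factor disappears in the normalize, which is why the 1/sqrt(2) length never matters.)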

Equation 5.7 on page 191 has a hat over the q (i.e. unit quaternion notation). However, I don’t believe that it’s actually a unit quaternion; specifically, I think its length is something like 2*sqrt(2*cos(theta) + 2).

Yes, that’s a typo. No hat.

On page 191, when talking about converting from a rotation matrix to a quaternion, it says that “if the trace of the matrix is less than zero, then this will not work.” I was wondering if you could explain why?

While trying to figure it out, I found another way of performing this conversion which may or may not be equivalent to your own method:

http://www.euclideanspace.com/maths/geometry/rotations/conversions/matrixToQuaternion/index.htm

In that case, it involves taking sqrt(Trace+1), so it actually fails if Trace+1<0. What is the difference between that method and your method which makes the difference between Trace<0 and Trace+1<0? Is it related to the fact that if the rotation was in a 4×4 affine transformation matrix then you’d get +1 when you take the trace?

The trace+derived axis+normalization technique will fail as theta gets close to pi, because r approaches a zero vector there (see page 183). In this case, you’d end up with something close to (-1,0,0,0), which is the negation of the identity quaternion and probably not the correct result. So if the trace is close to -1, you need to do more work.

So why check against 0? In an abstract math world with infinite precision, doing this with a trace between -1 and 0 would be fine. But with floating point, your derived rotation axis vector effectively becomes noise as it gets smaller and smaller. Just doing the zero check avoids a lot of floating point headaches.

The website also does a zero check; it’s basically equivalent to what I have but the normalization is handled by the 0.5f/sqrt(trace+1.0f) term (modified slightly for w). Theirs is probably faster as it doesn’t require squaring and summing the components to do the normalization.
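For reference, the positive-trace branch looks something like the following sketch (written in the linked page’s style, column-vector convention; the signs of the off-diagonal differences flip if your matrices are transposed):

    #include <cmath>

    // Convert a pure rotation matrix m to a quaternion (w, x, y, z).
    // Handles only the trace > 0 case; when the trace is near -1 (theta
    // near pi), instead pick the largest diagonal element, solve for the
    // corresponding quaternion component first, and derive the rest.
    void RotationToQuaternion(const float m[3][3],
                              float& w, float& x, float& y, float& z)
    {
        float trace = m[0][0] + m[1][1] + m[2][2];
        if (trace > 0.0f)
        {
            float s = std::sqrt(trace + 1.0f);   // equals 2*cos(theta/2)
            w = 0.5f * s;
            float recip = 0.5f / s;
            x = (m[2][1] - m[1][2]) * recip;
            y = (m[0][2] - m[2][0]) * recip;
            z = (m[1][0] - m[0][1]) * recip;
        }
        // else: the largest-diagonal case, omitted here
    }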

Updates and 2012 Tutorial Followup

Filed under: General,Tutorial — Jim @ 11:04 am

So it’s been a while since I checked comments — it appears that email forwarding got broken at some point — and I see my last post had a large number of comments regarding errata. I believe the commenter also sent me a tweet about them, so no real excuses there. And it probably didn’t help that the site went down for a couple of weeks. In any case, I’ll take a look today and address those.

As far as this year’s tutorials go, I believe they were a success. There was a lot of interest after each presentation, and while it’s hard to judge based on just that (a bit of selection bias there), most people seemed to like what they heard. This week I’ll be putting up the latest slides for this year, as well as the missing ones from previous years.

One thing I’d like to do, now that I have some free time, is post some more on this blog. My presentation this year was a revamp of an orientation representation/quaternion talk I’ve given in past years, with a new section on 2D rotations. So I’m thinking I might do a series of posts expanding on complex numbers as a method for 2D rotation. It might be possible to store the cosine of the half angle and do some interesting things with that, as the sine of the half angle is always implied (when using half angles, you can stay within the upper semicircle and assume the sine is always the positive root sqrt(1 - cos^2(theta/2))). We’ll see how that goes.
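As a preview of the basic 2D machinery (a minimal sketch of the complex-number view, not yet doing anything with the half-angle storage idea):

    #include <cmath>
    #include <cstdio>

    // Treat the point (x, y) as the complex number x + iy. Rotating by
    // theta is multiplication by cos(theta) + i*sin(theta):
    //   (c + i*s)(x + i*y) = (c*x - s*y) + i*(s*x + c*y)
    void Rotate2D(double theta, double& x, double& y)
    {
        double c = std::cos(theta);
        double s = std::sin(theta);
        double rx = c * x - s * y;
        double ry = s * x + c * y;
        x = rx;
        y = ry;
    }

    int main()
    {
        double x = 1.0, y = 0.0;
        Rotate2D(1.5707963267948966, x, y);   // quarter turn (pi/2)
        std::printf("(%g, %g)\n", x, y);      // approximately (0, 1)
        return 0;
    }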

An alternative is a series on the math behind Robin Green’s talk this year — though to stay off his toes, it might instead be a series on the math you need to know in order to understand Robin Green’s talk.
