Essential Math Weblog Thoughts on math for games

9/6/2015

Third Edition Announcement, Plus Demo Code Notes

Filed under: General — Jim @ 11:11 am

It occurs to me that while I’ve been plugging this on Twitter, I haven’t mentioned it here: Essential Mathematics for Games and Interactive Applications has just come out in a third edition, kindly published by our friends at AK Peters and CRC Press. The book has been brought up to date, and certain chapters have been revised to flow better. The most significant change is probably the lighting chapter, which still uses a very simple lighting model but now builds it out of a physically based approach. There are also updates in the discussions of floating point formats, shaders, color formats, and random number generators, plus much more! Please take a look; I think you’ll find it worthwhile.

Another piece of big news is that the code that goes with the book has been updated as well. The previous edition used OpenGL 2.0 and D3D 9 — now it uses OpenGL 3.2 Core Profile and D3D 11. Anything that depended on the old fixed-function pipeline (such as the old built-in OpenGL varyings and uniforms) has been updated. And the book no longer comes with a CD. All the code is freely (both in terms of beer and of speech) available on GitHub. This should be good news for those of you who purchased the electronic edition; other than the platform updates it should be compatible with the second edition.

Finally, I’d like to include a few notes on the demo code. Some of the code was written after the text for the book was largely complete, so some of the minor challenges and decisions in writing a cross-platform renderer aren’t really covered, or aren’t covered in much detail.
(more…)

3/29/2015

Rendering Signed Distance Fields, Part 3

Filed under: Mathematical,Tutorial — Jim @ 1:59 pm

So Part 1 and Part 2 laid the groundwork for rendering signed distance fields. Now I’ll present a general shader for doing so. This is a reimplementation that retraces the work originally presented by Qin, et al (basically, I missed that paper during my original research).
(more…)

3/19/2015

Rendering Signed Distance Fields, Part 2

Filed under: Mathematical,Tutorial — Jim @ 9:32 pm

So in Part 1 of this series, I talked about how to create a shader to render a distance field texture that only has uniform scale, rotation or translation applied — no shear or perspective transformations. So now one might expect (as I promised) that I’d talk about how to create a general shader for distance fields. After looking at what I wrote, however, I realized that a) I need to explain why you might want to use a signed distance field texture and b) to explain the rest I need to bring everyone up to speed on multivariable derivatives — namely gradients and the Jacobian. So I’ll cover those in this post, and finish up with the full answer next time. That said, I did fix the bug in Skia, so if you know all this or are impatient, you can skip ahead and look at the source code.
(more…)

3/15/2015

Rendering Signed Distance Fields, Part 1

Filed under: Mathematical,Tutorial — Jim @ 9:50 pm

So at the end of my GDC 2015 presentation “How to Draw an Anti-Aliased Ellipse” I mentioned that you could extend the same techniques to signed distance fields (SDFs), in particular for text. However, some details might be helpful. Because SDFs encode distance directly, the approach is slightly different from that for an implicit field like an ellipse. I also made the grandiose statement that you should use the shaders in Skia because none of the others work properly. After prepping this post, I realized that was only half right — the case that we see 99% of the time in Chrome is correct, but the case that you might see in a 3D world — e.g., a decal on a wall — is not. I’ll talk about a better way to handle that in Part Two.

So first, go look at the presentation linked above, just as a quick review. I’ll wait.
(more…)

10/21/2013

New Thoughts on Random Numbers, Part Two

Filed under: Erratical,Mathematical — Jim @ 10:32 pm

Sorry about the long delay on this. I originally intended to post this right away, but then realized I should do some further testing… and for whatever reason that never happened until last week. In any case, as I mentioned in my previous post, I recently added a new pseudo-random number generator to the Skia test suite. The last post was about verifying that a new PRNG was needed, this one is about what I used to replace it.

My initial thought about how to replace the Linear Congruential Generator was to jump right to the Mersenne Twister. However, the Mersenne Twister requires a buffer of 624 integers. While this is not terrible, it’s probably overkill for what we need, and the large buffer is not great for low-memory situations. In addition, part of my research showed that it’s possible to get large periods with other, simpler methods.

To make a long story short, because we do a lot of testing on mobile platforms I wanted something low-memory and super simple. I ended up taking one of Marsaglia’s generators and modifying it slightly. This is the “mother-of-all” generator that I mention in the random number generator chapter:

k = 30345 * (k & 65535) + (k >> 16);
j = 18000 * (j & 65535) + (j >> 16);
return (k << 16) + j;

where k and j are unsigned 32-bit integers, initialized to non-zero values.

When I ran this through the Tuftests suite, it passed the GCD and Birthday Spacings tests, but failed the Gorilla test on half of the bits. So I figured, what the hey, I’ll just combine the high bits of k into the low bits of the result as well, or

k = 30345 * (k & 65535) + (k >> 16);
j = 18000 * (j & 65535) + (j >> 16);
return ((k << 16) | (k >> 16)) + j;

Surprisingly, this passed all three tests. In addition, I ran a test to try to see what its period is. It ran over a weekend and still didn’t hit 2^64 values. While a good part of that time was running the test on each new value, it does show that in a game or your average application, we probably don’t need these large periods because practically we’ll never generate that many values (the exception might be a game server, which is up for days). For Skia’s particular bench and unit tests, we might generate a few thousand random values — as long as our period is more than that we should be okay.
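
For anyone who wants to experiment with it, here’s a minimal self-contained sketch of the modified generator above, wrapped in a tiny C++ class. The class and member names are mine, not Skia’s, and the default seeds are arbitrary; the only real requirement is that both state words start out non-zero.

#include <cstdint>

class TinyRandom
{
public:
    // Both state words must be non-zero.
    explicit TinyRandom(uint32_t seedK = 0x12345678, uint32_t seedJ = 0x87654321)
        : fK(seedK ? seedK : 1), fJ(seedJ ? seedJ : 1) {}

    uint32_t next()
    {
        // Two multiply-with-carry style updates: multiply the low 16 bits
        // and add the previous high 16 bits back in as the carry.
        fK = 30345 * (fK & 65535) + (fK >> 16);
        fJ = 18000 * (fJ & 65535) + (fJ >> 16);
        // Swap the halves of k before adding j, so the weaker bits of k
        // don't always land in the low bits of the result.
        return ((fK << 16) | (fK >> 16)) + fJ;
    }

private:
    uint32_t fK;
    uint32_t fJ;
};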

I’ve since run the TestU01 suite on this, and it passes SmallCrush, fails one of the Birthday Spacing tests (#11) on Crush, and two other Birthday Spacing tests (#13 and #15) on BigCrush. Other than that it passes. So while it’s not as random as other generators, at this point it doesn’t seem worth it to change it and rebaseline all of our results again, just for those failures.

If that doesn’t work for you, an alternative worth trying might be Marsaglia’s KISS generator, which takes four different random number generators and mixes their bits together. He claims (or claimed: Dr. Marsaglia unfortunately passed on in 2011) that the 2011 version has a period of greater than 10^40000000. Regardless of quality, the 2011 version requires a large buffer which is probably not practical for low-memory mobile platforms. The 1993 version, on the other hand, requires only six values, and has a period of 2^123. It does still fail one Crush and one BigCrush test, but is still probably good enough for games.

One important note: KISS is not cryptographically secure, as proven here, and I strongly doubt the generator I describe above is either. And if you do need a cryptographically secure generator, it’s best to find an approved implementation and use that, as even slight changes in speed due to a naive implementation can give an attacker enough information to break your code. But of course, if the NSA was involved, you might want to be wary…

In any case, to get results suitable for game logic, you really don’t need the Mersenne Twister. One of these simple generators will probably do, and they are leaps and bounds better than any Linear Congruential Generator.

4/8/2013

New Thoughts on Random Numbers, Part One

Filed under: Erratical,Mathematical — Jim @ 9:00 am

I recently (December 2012) switched jobs from being an Engine Programmer at Insomniac Games to being a Software Engineer at Google. At Google, I’m now working on Skia, which is the 2D graphics library that underlies Chrome, Firefox, and parts of Android. One of the first things that came to my attention when I arrived was the pseudorandom number generator (PRNG) used to generate test cases for regressions — namely, that there was some concern that it wasn’t very random. In particular, the least significant bit flipped back and forth, and possibly other lower-order bits weren’t very random either. “Aha,” I thought, “Here’s something small and isolated that I can take on as a new hire. And it won’t take long; I can just draw on the random number chapter from our book.” Turns out that was partially right.

The first step was determining how good or bad the original PRNG was. Looking at it, it was clear that it was a linear congruential generator (LCG), a class which has notoriously poor randomness in its lower-order bits. LCGs also fail the spectral test, i.e. if you generate points in space they tend to fall along visible planes. Not so good for generating tests for a graphics library. But how to verify this?
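
To see the low-bit problem concretely, consider a power-of-two-modulus LCG with an odd multiplier and an odd increment (the constants below are the classic Numerical Recipes values, not Skia’s). Since a and c are both odd, the new low bit is just the old low bit plus one, mod 2, so bit 0 strictly alternates no matter how you seed it:

#include <cstdint>
#include <cstdio>

int main()
{
    // Classic Numerical Recipes LCG constants; both a and c are odd.
    const uint32_t a = 1664525;
    const uint32_t c = 1013904223;
    uint32_t x = 1;

    // The least significant bit of the output alternates 0, 1, 0, 1, ...
    for (int i = 0; i < 16; ++i)
    {
        x = a * x + c;      // implicitly mod 2^32
        printf("%u ", x & 1);
    }
    printf("\n");
    return 0;
}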

In the book I discuss some batteries of tests, such as Marsaglia’s Diehard and Brown’s DieHarder. In doing some research I found a couple more. First, there’s TestU01, which is a more extensive suite released in 2009. And Marsaglia and Tsang have created a smaller set of tests called tuftests, which they claim are nearly as rigorous as the large suites — i.e. if a PRNG passes tuftests, it’s highly likely to pass Diehard. For our case, we aren’t trying to find the ultimate PRNG, just something fast and reasonably random. So tuftests should suffice.

Tuftests consists of only three tests: the GCD test (as opposed to the GDC test, which probably involves giving a coherent talk after a night of parties), the Gorilla test, and the Birthday Spacings test. The GCD test generates a large series of pairs of pseudorandom values, finds their greatest common divisors, and then compares the distribution of the number of iterations needed to compute the GCDs against the distribution expected for truly random values. The Gorilla test is a variant of the Monkey test, where random strings of bits are generated using a single bit position of a random variable. This is done multiple times. The count of strings not generated should fall in a normal distribution. The Birthday Spacings test generates a set of birthdays in a “year”, and then determines the spacings between them, which should fall in a Poisson distribution. For all of these, the generated distribution is compared with the expected distribution, using a chi-square, Anderson-Darling, or other method.
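
As a toy illustration of the flavor of the GCD test (this is just a sketch, not the actual tuftests code or its scoring), you can tally how many iterations Euclid’s algorithm takes for each pair of outputs; a real test then compares that tally against the distribution expected for truly random values using a chi-square or similar statistic:

#include <cstdint>
#include <cstdio>

// Number of iterations Euclid's algorithm takes to reach gcd(u, v).
static int GcdSteps(uint32_t u, uint32_t v)
{
    int steps = 0;
    while (v != 0)
    {
        uint32_t r = u % v;
        u = v;
        v = r;
        ++steps;
    }
    return steps;
}

int main()
{
    // Placeholder generator; substitute the PRNG under test here.
    uint32_t state = 12345;
    auto next = [&state]() { state = 1664525 * state + 1013904223; return state; };

    int histogram[64] = {};
    for (int i = 0; i < 1000000; ++i)
    {
        uint32_t u = next();
        uint32_t v = next();
        if (u == 0 || v == 0)
            continue;
        int steps = GcdSteps(u, v);
        if (steps < 64)
            ++histogram[steps];
    }

    for (int s = 0; s < 64; ++s)
        if (histogram[s] != 0)
            printf("%2d iterations: %d\n", s, histogram[s]);
    return 0;
}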

Testing the LCG used by Skia, it was clear that it failed the Gorilla test for all but the highest-order bits (it may have also failed the Birthday test; I don’t recall now). So the belief that Skia’s PRNG needed improvement was verified. The next step was to find a decent replacement that doesn’t require a lot of overhead (important for testing on mobile devices). And I’ll cover my search for that next time.

4/1/2013

GDC 2013

Filed under: Tutorial — Jim @ 7:46 pm

So GDC 2013 has come and gone, and with it another tutorial session. This year I did not run the Physics Tutorial, passing that on to Erin Catto’s more than capable hands. Instead, I organized the Math Tutorial for the first day, with the following lineup:

  • Interpolation and Splines – Squirrel Eiserloh (The Guildhall at SMU)
  • Matrix Transformations – Squirrel Eiserloh (The Guildhall at SMU)
  • Understanding Quaternions – Jim Van Verth (Google)
  • Dual Numbers – Gino van den Bergen (Dtecta)
  • Orthogonal Matching Pursuit and K-SVD for Sparse Encoding – Robin Green (Microsoft) and Manny Ko (Imagination Technologies)
  • Computational Geometry – Graham Rhodes (Applied Research Associates Inc.)
  • Interaction with 3D Geometry – Stan Melax (Intel)

Overall, I thought it was a good mix of beginning and advanced topics. So a big thank you to all the speakers — I learn something from them every year, and without them it wouldn’t be possible. And as time goes on, I’ll add links to the slides as they come in.

Before I close, I have one comment on my talk. As I was discussing the matrix form of the quaternion, I mentioned that multiplying by a rotation on the left is the same as multiplying by the inverse of the rotation on the right. As I think I made clear later, this is not true. I was trying to convey how I originally — and incorrectly — thought about it, but I fear I may have misled some in the audience. What I was thinking was that if you multiply a column vector on the right by a rotation matrix, i.e. Rv, this is the same as multiplying a row vector on the left by the inverse, i.e. v^T R^-1. But of course, we’re not multiplying vectors, we’re multiplying a matrix Q which represents a quaternion, which in turn represents a vector. So it’s not quite the same thing, and in the end the result is not the same. I’ve modified the notes in the slides to make this clearer.
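
To put the distinction in symbols (my restatement, not part of the slides): for a rotation matrix R and a column vector v,

$$ R\mathbf{v} = \left(\mathbf{v}^{\mathsf{T}} R^{\mathsf{T}}\right)^{\mathsf{T}} = \left(\mathbf{v}^{\mathsf{T}} R^{-1}\right)^{\mathsf{T}}, $$

since R^{-1} = R^T for a rotation matrix. But when the thing being multiplied is a matrix Q representing a quaternion rather than a bare vector, RQ and QR^{-1} are different products, and they are not equal in general.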

5/6/2012

ECGC 2012

Filed under: General,Mathematical — Jim @ 6:55 pm

I’ve spoken at the East Coast Game Conference in the past, mostly doing a version of our physics tutorial (cut down to one hour, if you can believe it). This year I wanted to do something different and talk about something new, in this case the work I did to get stereo up and running in Ratchet & Clank: All 4 One. Attendance was a little lower than I expected, but it was still a good audience. I have to admit the title of the talk might have scared some people off: Hybrid Stereoscopy in Ratchet & Clank: All 4 One. Maybe the description will be less foreboding:

This talk will cover Insomniac’s hybrid 3D stereo technique, using a combination of reprojection for opaque objects and standard stereo rendering for transparent objects. This gives something close to the speed of reprojection with fewer artifacts. The talk will cover the process followed to create the technique, equations used to match the reprojected and standard data, and a list of mistakes to avoid for anyone interested in stereo.

I do cover some math that a lot of other stereo talks brush over — so if you’re interested in stereo please click on the link and check it out.

3/11/2012

More Errata

Filed under: Erratical,Mathematical — Jim @ 5:02 pm

So Jacob Shields, a student at Ohio State, added a set of errata to the comments on another post. To address them, I’ve pulled them out one by one:

I came across this on page 117 of the Second Edition: “There is a single solution only if the rank of A is equal to the minimum of the number of rows or columns of A.”

If I’m interpreting this correctly, it follows that a 2×3 coefficient matrix with a rank of 2 has a single solution. In reality, it has 1 free variable and thus infinitely many solutions. Shouldn’t there be a single solution only if the rank of A is equal to the number of unknowns, i.e. columns?

Yeah, I was probably only thinking of square matrices when I wrote that. I’ll have to think of another way to put that. Saying that it’s equal to the number of columns doesn’t work if you’re solving xA = b.
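
A concrete example of the reader’s point (my example, not from the book): take

$$ A = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}, \qquad A\mathbf{x} = \mathbf{b}. $$

Here rank(A) = 2, which equals the minimum of the number of rows and columns, yet there are three unknowns, so x_3 is free and any consistent b has infinitely many solutions. Uniqueness requires the rank to equal the number of unknowns: the columns for Ax = b, the rows for xA = b.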

Also, on page 126 it says: “The first part of the determinant sum is u_[0,0]*U~_[0,0].” Below, it displays this equation: “det(U) = u_[0,0]*U~_[0,0]”

Assuming I’m interpreting this correctly, this must be wrong. The determinant of a real matrix should be a real number, not another matrix. In both cases I think it means to say u_[0,0]*det(U~_[0,0]) — right?

Yes, that’s correct, it’s a typo. The notation should follow from that used on page 123.

[…] on page 130 it says: “It turns out the columns of R are the eigenvectors of A, and the diagonal elements of D are the corresponding eigenvectors.” Shouldn’t it be that “the diagonal elements of D are the corresponding eigenvalues”?

Yup, another typo.

Could you please explain what is meant by the first matrix multiplication shown on page 138 in Section 4.2.3, Formal Representation [of affine transformations]?

For example, where did the matrix come from? Its entries are vectors, and the dimension of the product is (m+1)x1. What is the significance of the product, and how does it apply to what was just said?

I understand the matrix multiplication below it. In this case, the dimension of the product is 1×1 and, when expanded out, is equal to T(P). On initial reading it seems like the first matrix multiplication and the second one are supposed to be equivalent (before the second it says, “We can pull out the frame terms to get…”), but they’re clearly different, so how did you go from one to the other?

I’m not sure what the issue is; it’s similar to the linear transformation discussion on 104-106. The first n columns of the first matrix are the T(v_j) terms and the last is the T(O_b) term. The product of the two matrices is the transformed result T(P). The last sentence on 138 sums it all up.

On page 180, the description in the first whole paragraph about rotating an object to demonstrate gimbal lock says to “rotate the object clockwise 90 degrees around an axis pointing forward … [then] rotate the new top of the object away from you by 90 degrees … [then] rotate the object counterclockwise 90 degrees around an axis point up …. The result is the same as pitching the object downward 90 degrees (see Figure 5.4).”

However, if I’m interpreting this correctly, the rotation described appears to be a fixed angle x-y-z (-pi/2,-pi/2,pi/2) rotation. The result of this is the same as pitching the object _upward_ 90 degrees. Furthermore, Figure 5.4 demonstrates a different rotation: a fixed angle x-y-z (pi/2,pi/2,pi/2) rotation.

If the x-axis is pointing away from you and you rotate clockwise (from your perspective) by 90 degrees, that’s the same as rotating around the x-axis by positive 90 degrees, using the right-hand rule. Similarly, rotating the top of the object away from you is rotating around the y-axis by positive 90 degrees. The result should be the same as the diagram.

And this is why math uses symbols rather than trying to explain it using natural languages…

I might be missing something here, but on pages 181-182 it says this: “Near-parallel vectors may cause us some problems either because the dot product is near 0, or normalizing the cross product ends up dividing by a near-zero value.”

For near-parallel vectors, wouldn’t the dot product be near +/- 1, not 0?

I’m not sure what I meant there. Possibly that the dot product magnitude being near 1 (not 0, as you point out) can, through floating point error, lead to a result with magnitude slightly greater than 1, which is not valid input to acos.
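
In practice the usual defensive fix is to clamp the dot product before handing it to acos. A minimal sketch (not code from the book):

#include <algorithm>
#include <cmath>

// Angle between two unit vectors, given their dot product, guarded
// against floating point error pushing the value outside [-1, 1].
float AngleFromDot(float dot)
{
    dot = std::max(-1.0f, std::min(1.0f, dot));
    return std::acos(dot);
}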

On page 178, under the heading “5.3.3 Concatenation” it says this: “Applying (pi/2, pi/2, pi/2) twice doesn’t end up at the same orientation as (pi, pi, pi).”

However, for fixed angle z-y-x (the most-recently mentioned convention in the book prior to this section), applying (pi/2, pi/2, pi/2) twice does appear to end up at the same orientation as (pi, pi, pi). This isn’t the case for fixed angle x-y-z, though (and presumably other conventions). That had me confused for a while!

I believe that should be: Applying (pi/4,pi/4,pi/4) twice doesn’t end up at (pi/2,pi/2,pi/2). Your confusion is understandable. Though as (very) weak defense, when I multiply it out on the computer, floating point error does give me slightly different results (some of the “zero” terms end up being very small f.p. values).

On page 191, about halfway down the page, it mentions vectors w_0 and w_1, and then says that if we want to find the vector that lies halfway between them, we need to compute (w_1+w_2)/2. I’m assuming that it really means (w_0+w_1)/2?

As an aside, what is the significance of the division by 2? That is, using this method, what is the expected length of the resultant vector (without normalizing)? For instance, let w_0 = (1,0,0) and w_1 = (0,1,0). Then the halfway vector is (1+0,0+1,0+0)/2 = (0.5,0.5,0), whose length is 1/sqrt(2). Clearly this points in the correct direction, but its length is not the length of w_0, the length of w_1, or the average length of them, so what is it used for?

Correct, it should be w_0 and w_1. And the division by two is just to represent the average. As I point out, you don’t need to divide by 2 if you want a normalized result — you can skip that and do the normalize step. The length is never used.
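
In code the point is simply that the sum and the average point in the same direction, so a normalized halfway vector can skip the divide entirely. A tiny sketch, assuming some Vector3 type with operator+ and a Normalize function (neither is meant to be the book’s exact interface):

// Halfway direction between two vectors; the 0.5 scale would only
// change the intermediate length, so it can be dropped.
Vector3 Halfway(const Vector3& w0, const Vector3& w1)
{
    return Normalize(w0 + w1);
}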

Equation 5.7 on page 191 has a hat over the q (i.e. unit quaternion notation). However, I don’t believe that it’s actually a unit quaternion–specifically, I think its length is something like 2*sqrt(2*cos(theta) + 2).

Yes, that’s a typo. No hat.

On page 191, when talking about converting from a rotation matrix to a quaternion, it says that “if the trace of the matrix is less than zero, then this will not work.” I was wondering if you could explain why?

While trying to figure out, I found another way of performing this conversion which may or may not be equivalent to your own method:

http://www.euclideanspace.com/maths/geometry/rotations/conversions/matrixToQuaternion/index.htm

In that case, it involves taking sqrt(Trace+1), so it actually fails if Trace+1<0. What is the difference between that method and your method which makes the difference between Trace<0 and Trace+1<0? Is it related to the fact that if the rotation was in a 4×4 affine transformation matrix then you’d get +1 when you take the trace?

The trace+derived axis+normalization technique will fail as theta gets closer to pi, because r approaches a zero vector as theta approaches pi (see page 183). In this case, you’d end up with something close to (-1,0,0,0), which is the inverse of the identity quaternion and probably not the correct result. So if the trace is close to -1, you need to do more work.

So why check against 0? In an abstract math world with infinite precision, doing this with a trace between -1 and 0 should be fine. But with floating point your derived rotation axis vector effectively becomes noise as it gets smaller and smaller. It avoids a lot of floating point headaches just to do the zero check.

The website also does a zero check; it’s basically equivalent to what I have but the normalization is handled by the 0.5f/sqrt(trace+1.0f) term (modified slightly for w). Theirs is probably faster as it doesn’t require squaring and summing the components to do the normalization.
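
For reference, here’s a sketch of that trace-positive branch in the euclideanspace.com style, assuming a column-vector convention and m[row][col] indexing (so this is the general idea, not the book’s exact code); when the trace isn’t positive you fall through to the largest-diagonal-element cases:

#include <cmath>

struct Quat { float w, x, y, z; };

// Convert a 3x3 rotation matrix to a quaternion, trace-positive branch only.
Quat QuatFromMatrix(const float m[3][3])
{
    Quat q = { 1.0f, 0.0f, 0.0f, 0.0f };
    float trace = m[0][0] + m[1][1] + m[2][2];
    if (trace > 0.0f)
    {
        float s = 0.5f / std::sqrt(trace + 1.0f);   // s = 1/(4w)
        q.w = 0.25f / s;
        q.x = (m[2][1] - m[1][2]) * s;
        q.y = (m[0][2] - m[2][0]) * s;
        q.z = (m[1][0] - m[0][1]) * s;
    }
    else
    {
        // Trace <= 0: pick the largest diagonal element and solve for
        // that component first (cases omitted here).
    }
    return q;
}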

Updates and 2012 Tutorial Followup

Filed under: General,Tutorial — Jim @ 11:04 am

So it’s been a while since I checked comments — it appears that email forwarding got broken at some point — and I see my last post had a large number of comments regarding errata. I believe the commenter also sent me a tweet about them as well, so no real excuses there. And it probably didn’t help that the site went down for a couple weeks. In any case, I’ll take a look today and address those.

As far as this year’s tutorials go, I believe they were a success. There was a lot of interest after each presentation, and while it’s hard to judge based on just that (a bit of selection bias there), most people seemed to like what they heard. I will, this week, be putting up the latest slides for this year, as well as the missing ones from previous years.

One thing I’d like to do, now that I have some free time, is post some more on this blog. My presentation this year was a revamp of an orientation representation/quaternion talk I’ve given in past years, with a new section on 2D rotations. So I’m thinking I might do a series of posts expanding on complex numbers as a method for 2D rotation. It might be possible to store just the cosine of the half angle and do some interesting things with that, as the sine of the half angle is implied (when using half angles, you can stay within the upper semicircle and assume the sine is always +sqrt(1 - cos^2(theta/2))). We’ll see how that goes.
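
As a rough sketch of the kind of thing I mean (nothing worked out yet, and the names are just placeholders): if theta is kept in [0, 2*pi) then theta/2 lies in [0, pi), so the half-angle sine is non-negative and the full-angle terms can be rebuilt from the stored cosine alone using the double-angle identities.

#include <cmath>

// Store only c = cos(theta/2); sin(theta/2) is implied as +sqrt(1 - c^2)
// because theta/2 stays in the upper semicircle [0, pi).
struct HalfAngleRotation2D
{
    float c;   // cos(theta/2)

    void Rotate(float& x, float& y) const
    {
        float s = std::sqrt(1.0f - c * c);       // sin(theta/2) >= 0
        float cosTheta = 2.0f * c * c - 1.0f;    // cos(theta) = 2cos^2(theta/2) - 1
        float sinTheta = 2.0f * s * c;           // sin(theta) = 2sin(theta/2)cos(theta/2)
        float newX = x * cosTheta - y * sinTheta;
        float newY = x * sinTheta + y * cosTheta;
        x = newX;
        y = newY;
    }
};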

An alternative is a series on the math behind Robin Green’s talk this year — though to keep off of his toes it might be a series on the math you need to know in order to understand Robin Green’s talk.

5/15/2011

Collision Response errata

Filed under: Erratical — Jim @ 8:38 pm

On page 636 of the second edition, the second paragraph begins as follows:

Each object will have its own value of epsilon. […]

On page 637, equation 13.18 contains a term epsilon_a, and below it is written

The equation for j_b is similar, except that we substitute epsilon_b for epsilon_a.

The code on page 638 follows this by referring to m_Elasticity and other->m_Elasticity.

These assertions are incorrect. The coefficient of restitution is not a property of an individual object — it is a property of the collision itself. This can be seen clearly in equation 13.15, which applies to both objects. So if we are considering four materials A, B, C, and D, there would be six possible coefficients of restitution: A colliding with B, A colliding with C, A colliding with D, B colliding with C, B colliding with D, and C colliding with D.

Many thanks to James McGovern of the Art Institute of Vancouver for pointing this out.
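
One way to handle this in code (a sketch of the general idea, not the book’s implementation) is to look the coefficient of restitution up by the pair of materials involved in the collision rather than storing one value per object; a symmetric table also covers same-material collisions, which the six-pair count above leaves out:

enum Material { kMatA, kMatB, kMatC, kMatD, kNumMaterials };

// Symmetric table of restitution coefficients, one entry per material pair.
// The values here are made up purely for illustration.
static const float kRestitution[kNumMaterials][kNumMaterials] =
{
    //   A      B      C      D
    { 0.50f, 0.40f, 0.70f, 0.20f },   // A
    { 0.40f, 0.30f, 0.60f, 0.10f },   // B
    { 0.70f, 0.60f, 0.90f, 0.30f },   // C
    { 0.20f, 0.10f, 0.30f, 0.05f },   // D
};

float GetRestitution(Material a, Material b)
{
    return kRestitution[a][b];   // symmetric, so argument order doesn't matter
}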

An error that was corrected in the second edition, but is not noted in the errata for the first, is in the equation for the impulse magnitude for rotational collision (Equation 12.24, page 618). The sum of cross products in the denominator should be dotted with the normalized collision normal.

And in both editions, the moments of inertia tensors used in the collision response equations (on pages 617-618 in the first edition, and page 639 in the second edition) are referred to as I_a and I_b. These should be J_a and J_b to match the previous notation used. The intent of this was to separate the notation for the identity matrix from the notation for the inertia tensor. However, standard convention is to use I for the inertia tensor, hence the typo.

Thanks to Johnny Newman, also of Vancouver, for discovering the latter two errors.

Apologies for any confusion any of this may have caused readers.

2/23/2011

2011 GDC Tutorial Lineup

Filed under: Tutorial — Jim @ 10:06 pm

Here’s the lineup for this year’s Physics for Programmers tutorial. Times are approximate, but should be pretty close:

10:00-10:10 – Introductions
10:10-11:00 – Squirrel Eiserloh – It’s All Relative
11:00-11:15 – BREAK
11:15-12:00 – Erwin Coumans – Game Physics Artifacts
12:00-12:30 – Jim Van Verth – Numerical Integration
12:30-14:00 – LUNCH
14:00-14:50 – Gino van den Bergen – Collision Resolution
14:50-15:25 – Takahiro Harada – GPU Physics
15:25-16:00 – Glenn Fiedler – Networked Physics
16:00-16:15 – BREAK
16:15-17:10 – Erin Catto – Soft Constraints
17:10-18:00 – Kees van Kooten – Fluids

I’m definitely looking forward to it; it should be a great day of physics talks. Hope to see you Tuesday, March 1, in Room 3007, West Hall, Moscone Center.

2/21/2011

A Pair of Pertinent Podcasts

Filed under: Mathematical — Jim @ 12:00 pm

These have been in the queue for a while, but the BBC podcast In Our Time recently (as in the past six months) covered two more mathematical topics related to the book. The first is on imaginary numbers (http://www.bbc.co.uk/programmes/b00tt6b2), which is useful if you’re interested in having a deeper understanding of quaternions. The second is on random and pseudorandom numbers, which parallels one of the chapters in the second edition. I highly recommend the podcast in general, as well. Each show is about 45 minutes long, and features a panel of three experts on the topic moderated by Melvyn Bragg.

12/28/2010

Two CD Updates

Filed under: Erratical — Jim @ 9:36 pm

Yes, it’s been a while. But here are two updates for those who make use of the CD that comes with the book.

First, Ben Bass over at Coded Structure wrote to let me know that the code on the CD doesn’t build properly on Mac OS X 10.6. Fortunately he has provided a guide on his blog showing how to patch up the Makefiles. Thank you, Ben!

Secondly, another reader wrote to point out that the CD notes say that prebuilt Windows versions of the OpenGL examples would be provided, but in fact they aren’t on the CD. These are now available in a separate zip file which you can download here.

8/6/2009

Mike Acton Takes on C++ Math Libraries

Filed under: General,Mathematical — Jim @ 11:57 am

A couple weeks ago, Mike Acton, head of Core Technologies at Insomniac (where I’m now working, btw) posted this reaming of some standard C++ math code. I think it’s an excellent read if you’re interested in deeply optimizing your math code, in particular for consoles, and in particular for the PS3.

Of course, posting this at all is a bit uncomfortable from where I’m sitting.  I think it’s safe to say that if you were to compare our library code to what he is, er, analyzing, you’d find some significant similarities. However, our libraries were mainly designed to teach mathematical concepts rather than optimization concepts. As such, you end up with classes that represent mathematical entities, e.g. Planes, Spheres, Lines, etc. and simple code that emphasizes clarity. That said — as Mike’s slides aptly demonstrate — if you’re trying to squeeze every cycle out of the machine there are other ways to organize your data and code, and in fairness we didn’t address that much at all.  In retrospect this was an unfortunate oversight. I’d say more emphasis on optimization would be a good addition to the Third Edition, but I’m not sure what my family would think of another book…

6/10/2008

Indexed

Filed under: Mathematical — Jim @ 10:17 am

As part of the book review (see previous article), they have images from the Indexed weblog.   If you like graphical math jokes, you might find them amusing.

Probability Resources

Filed under: Mathematical — Jim @ 10:05 am

For those who are not aware, there is a new chapter in the book about Random Numbers. Because it is only a 50-page chapter in a much larger work, I don’t go into as much detail as I would have liked about probability. So I recommend a couple of longer works dedicated to the subject. One is Grinstead and Snell’s Introduction to Probability, which I had the honor of being taught from (in a pre-release edition) by John Kemeny. The other is Larry Gonick’s Cartoon Guide to Statistics, which is a surprisingly informative little book. However, after seeing a review of The Drunkard’s Walk: How Randomness Rules Our Lives, by Leonard Mlodinow, I may have a third. I recommend at least reading the review, and possibly checking out the book.

And if you’re into podcasts (and who isn’t, really? But I jest), the BBC’s In Our Time podcast has an episode about Probability. I haven’t had a chance to listen to it yet, but if it’s like their other programmes, I’m sure it will be quite interesting and informative.

4/17/2008

It’s Done

Filed under: General — Jim @ 9:57 pm

After months of hard work, we’re finally done with the Second Edition. It went to press last week, and it will be available May 16th. Pre-order it now.

2/8/2008

Manifests and FAT32

Filed under: General — Jim @ 9:31 am

The following drove me crazy for the past few weeks, so I figured I’d better post about it just to get the word out.

For those who don’t know, VS 2005 added the notion of a “manifest” to manage DLLs. Every application must have a valid manifest, or it won’t run. In general, you can use the default settings, but if you’re running on a FAT32 partition (which I am, as I share it through Boot Camp), for some reason it just doesn’t always create the manifest correctly. Beats me why, since I’d think that sort of thing should be independent of what file system you have, but there you go. The solution is to check the “Use FAT32 work-around” option in the Manifest Tool section of the main project’s properties.

Now I can finally go through and update all of the examples to use our new rendering platform — it took a bit to put together, but we finally have a cross-platform renderer that supports both OpenGL and D3D9.

12/6/2007

Second Edition Update

Filed under: General — Jim @ 11:16 am

Sorry about the long delay between updates: things have been very busy here at Chez Jim.

Lars and I have been hard at work on the second edition, and are wrapping up the text this week to be shipped to the publisher. The major changes are:

  • Brand new chapter on random numbers
  • The three core rendering chapters have been completely rewritten for vertex and fragment shaders
  • Direct3D 9 and OpenGL 2 support added

In addition, all the remaining chapters have been revised for clarity, corrections and updated content. Skinning and morphing unfortunately didn’t make it into the main text because of space, but may end up on the CD or may be posted on this site at some point. Availability of this edition will be sometime next year.

A few bits of other news:

  • Lars and I are both at NVIDIA Corporation now. Lars is working on mobile technology, and I’m working on OpenGL drivers.
  • I should be at GDC in 2008, although that may be up in the air.

That’s about it for now. I’ll update as the process continues to let you know how it’s going.

