My Bug, My Bad: Sunk by Float
I had a bug along these lines in a C# application that was pretty fun to troubleshoot, because it only occurred when the runtime gave a particular float extra precision (by letting it live in full-size x87 registers).
I was using an unmodified version of one of those standard 'triangulation by ear clipping' algorithms, transliterated directly into C#, using floats. In practice, it did exactly what I wanted and I wasn't using it much, so I never put much thought into it. Then it started going into an infinite loop in some cases. Troublesome.
So, I'd get it to go into an infinite loop, I'd attach the debugger, and... it'd leave the infinite loop. What?
Hm, identify the arguments that produce the infinite loop (good ol' printf debugging!) and reproduce those in an isolated test case. Great, test case enters an infinite loop, let's attach the debugger... hey, what? It's not failing anymore.
As it turns out, the default behavior when you attach a debugger to a .NET executable is to have the JIT deoptimize some of your method bodies so that breakpoints and local variables are easier to work with (in their wisdom, MS did include an option to turn this off). So any time I ran the test case under a debugger, it passed: deoptimizing the method caused the floating-point math to flush to the 32-bit float representation more often, truncating the additional precision. The infinite loop turned out to be caused by that additional precision!
Best of all, the other platform I was writing the code for at the time (PowerPC) didn't exhibit the bug, because its floating-point implementation doesn't add additional precision.
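For anyone who hasn't run into this class of bug, here's a minimal sketch of the failure mode (not the actual ear-clipping code, just an illustration, and it assumes a JIT that keeps float intermediates in x87 registers):

```csharp
// Illustration only: a termination test whose behaviour depends on whether
// the arithmetic is really done in 32-bit floats or at extended precision.
class FloatLoopSketch
{
    static void Main()
    {
        float sum = 0.0f;
        const float step = 1e-7f;

        // With true 32-bit arithmetic, `sum + step` eventually rounds back to
        // `sum` (around sum == 2.0f here) and the loop exits after a few tens
        // of millions of iterations. If the JIT evaluates the addition and the
        // comparison in an x87 register at extended precision, the difference
        // stays visible and the loop effectively never exits.
        while (sum + step != sum)
        {
            sum += step;
        }

        System.Console.WriteLine(sum);
    }
}
```

Writing the test as `(float)(sum + step) != sum` forces the narrowing explicitly and restores the intended 32-bit behaviour regardless of what the JIT does with registers.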
Had to deal with accumulating noise and unpredictable rounding in an application built on double-precision arithmetic.
Ended up scaling the amounts to integers, then converting those known and stable amounts to _character strings_, then doing the multiplication and division by character manipulation!
Don't laugh too hard; performance wasn't a consideration, but _decimal_ accuracy was. It did the job.
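For what it's worth, here's a rough sketch of the scaled-integer half of that approach, minus the string manipulation. The `Fixed3` name, the scale of 1000 (three decimal places) and the restriction to non-negative amounts are all just assumptions for the example:

```csharp
// Exact decimal arithmetic via scaled integers: amounts are stored as integer
// thousandths, so rounding happens only where you explicitly choose.
static class Fixed3
{
    const long Scale = 1000;

    // 12.345m -> 12345 thousandths.
    public static long FromDecimal(decimal amount) =>
        (long)decimal.Round(amount * Scale);

    // The raw product of two scaled amounts carries Scale twice, so divide
    // once, rounding half up (non-negative amounts assumed).
    public static long Mul(long a, long b) => (a * b + Scale / 2) / Scale;

    // Division keeps the result at the same scale, again rounding half up.
    public static long Div(long a, long b) => (a * Scale + b / 2) / b;
}
```

For example, `Fixed3.Mul(Fixed3.FromDecimal(12.5m), Fixed3.FromDecimal(0.075m))` gives 938, i.e. 0.9375 rounded to 0.938. C# also ships a 128-bit `decimal` type that covers most "exact to N decimal places" cases directly; the scaled-integer trick is just the portable version of the same idea.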
Rules for using floats:
1. Don't use floats. 2. Never use big floats. 3. WHY ARE YOU USING FLOATS!?
Surely just casting to double is going to be a far more practical and efficient solution than using a global volatile float?
The problem of the compiler unpredictably giving floats extra precision can be fixed by just casting to a double, ensuring that you always use the extra precision.
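For what it's worth, in C# the documented way to throw away whatever extra precision the JIT is carrying is an explicit cast; whether you then standardize on float or on double is a separate choice. A small sketch (the two lines only differ on JITs that keep intermediates wider than 32 bits, such as the old x87-based one):

```csharp
using System;

class CastSketch
{
    static void Main()
    {
        float a = 1.0f / 3.0f;
        float b = 3.0f;

        // May be evaluated at whatever intermediate precision the JIT picked,
        // so the answer can differ between runtimes and optimization levels.
        Console.WriteLine(a * b == 1.0f);

        // The explicit cast forces rounding to a 32-bit float before the
        // comparison, which gives the same answer everywhere.
        Console.WriteLine((float)(a * b) == 1.0f);
    }
}
```

Doing all the arithmetic in double works too, as long as it's done consistently; the real pitfall is letting the effective precision be whatever the register allocator happened to choose.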
"Floats are terrifying. Maybe not as terrifying as unicode, but pretty darn terrifying."
Good. At least someone other than Schneier realizes Unicode is a major PITA from a correctness and security point of view. Schneier said it best: "Unicode is too complex to ever be secure."
Back to floats. It's nearly always a mistake to be using floats unless you're working on scientific computation (and 3D and game physics are a kind of scientific computation).
If you don't know how to compute error propagation, floats aren't for you. There. As simple as that.
It's interesting to realize that most comments here do not address that fundamental issue: it's not a particular bug in fp that's the problem. It's the overuse of fp itself that is the problem.
I can give you one simple example of a boneheaded use of floats: in a gigantic spec (HTTP), someone decided there would be a probability expressed as a number between 0 and 1 with up to three decimal places. This is a terrible idea, and lots of people complained about it at the time. You should express this as an integer between 0 and 999. Why? Because you know that otherwise clueless programmers are going to make mistakes by using floating-point numbers where they shouldn't.
And surely some programmers did. Some clueless monkeys in the Tomcat codebase decided it was a good idea to parse a number which was known to have at most three digits after the dot using... Java's built-in floating-point number parsing. That's silly, of course: when a braindead spec talks about a number between 0.000 and 0.999, you parse it manually into integers and do integer math. But I digress.
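To make the "parse it into integers" suggestion concrete, here's a sketch in C# to match the rest of the thread (the real Tomcat code is Java, and `QValue.ParseThousandths` is a made-up name): a value between 0 and 1 with at most three digits after the dot becomes an integer count of thousandths, with no floating point anywhere.

```csharp
// Sketch: parse a value such as "1", "0.5" or "0.125" (one digit before the
// dot, at most three after) into integer thousandths, no floating point used.
static class QValue
{
    // Returns the value scaled by 1000, or -1 for malformed input.
    public static int ParseThousandths(string s)
    {
        int dot = s.IndexOf('.');
        string intPart  = dot < 0 ? s  : s.Substring(0, dot);
        string fracPart = dot < 0 ? "" : s.Substring(dot + 1);

        if (intPart.Length != 1 || fracPart.Length > 3) return -1;

        int value = 0;
        foreach (char c in intPart + fracPart.PadRight(3, '0'))
        {
            if (c < '0' || c > '9') return -1;
            value = value * 10 + (c - '0');
        }
        return value <= 1000 ? value : -1;   // "0.000" .. "1.000" -> 0 .. 1000
    }
}
```

What a malformed or out-of-range value should map to is a policy question; the point is only that nothing here can loop forever or change meaning depending on rounding behavior.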
So what did happen? Some people realized they could throw Java's floating-point number parser into an infinite loop, and that was it: the most deadly remote Java DoS exploit for attacking websites. By faking one browser request you could send one thread on a webapp server into an infinite loop. Rinse and repeat, and a single machine could DoS an entire server farm.
Due to stupid programmers using floating-point numbers when they shouldn't.
The problem is exacerbated by clueless people everywhere thinking floating-point numbers are a good idea. They're not. They should die a horrible death, and the days when CPUs didn't have floating-point hardware were arguably better days for 99.999% of what we use them for.
Thankfully, there's one area where even clueless monkeys aren't using floating-point numbers: cryptography.
I'd say that unless you can write Goldberg's "What Every Computer Scientist Should Know About Floating-Point Arithmetic" yourself, you shouldn't be using floating-point numbers. As simple as that.