> x == 0 (or even like x <= 0) is a strong code smell.
Careful: on IEEE 754 platforms (which is virtually everything today), 0.0f is defined to have the all-zero bit pattern 0x0. So such comparisons can be quite reasonable.
However, testing for 0 as the result of a computation, as in this example, is indeed problematic.
But there is plenty of reasonable code that initializes a float or double variable to 0.0 and can expect to be able to inspect it and find its bit pattern to be 0x0 before that variable is manipulated. After that, all bets are off.
The point isn't just that you can't tell exactly when something is zero, although indeed you often can't.
It's that if you have some calculation that needs to be special-cased at zero (e.g., because you're dividing by something and it might explode), then it probably actually needs to be special-cased near zero, because near zero it might overflow or run into other numerical problems. And it's probably worth seeing if there's another way of organizing the calculation that doesn't misbehave for small values in the first place.