Floating point numbers keep their decimal part as-is.
Integers always truncate (drop the decimal part), they do not round to the nearest whole number.
If you convert an int to a float, the decimals will all be zeroes (.00...).
If you convert a float to an int, the decimal part is simply dropped (truncated), so 3.75 becomes 3, not 4.
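Here is a minimal sketch of both conversions, assuming the standard NWScript functions FloatToInt, IntToFloat, FloatToString, IntToString and PrintString (the exact log formatting may differ):

// Demonstrates int <-> float conversion in NWScript.
void main()
{
    int   nValue = 3;
    float fValue = IntToFloat(nValue);   // 3 becomes 3.0 -- decimals are all zeroes
    int   nBack  = FloatToInt(3.75);     // 3.75 becomes 3 -- decimal part is dropped

    // Write the results to the log so you can check them yourself.
    PrintString("IntToFloat(3)    = " + FloatToString(fValue, 4, 2)); // ~"3.00"
    PrintString("FloatToInt(3.75) = " + IntToString(nBack));          // "3"
}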
It's good programming practice to avoid floats (and doubles, outside NWN) unless you genuinely need them (pi, square roots, etc.). Once decimals are in play, there is a good chance your calculations will drift because of roundoff errors: many decimal values, such as 0.1, have no exact binary representation. The sketch below shows this.
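A small illustration of the roundoff problem, again assuming PrintString and FloatToString; the exact printed value depends on the 32-bit float representation, but it will usually not be exactly 1.0:

// Adding 0.1 ten times does not necessarily give exactly 1.0.
void main()
{
    float fSum = 0.0;
    int i;
    for (i = 0; i < 10; i++)
        fSum += 0.1;

    if (fSum == 1.0)
        PrintString("Sum is exactly 1.0");
    else
        PrintString("Sum is " + FloatToString(fSum, 12, 9) + " -- not exactly 1.0");
}

This is why comparing floats with == is risky; compare against a small tolerance instead, or stick to ints where you can.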