the.gray.fox wrote...
...
Bug 1) the float datatype
Sparing you the gory details of the floating point encoding / decoding
(you can read all about it if you google for IEEE 754 Single Precision Floating Point)
I tell you that in NWscript the Least Significant Bit in the float Mantissa is
"lost" in the act of encoding. And upon decoding it is always assumed to be 0,
which causes a small loss of precision in the final decoded value.
The Least Significant Bit (LSB) is bit 0, or the 1st bit, or the >> rightmost >> bit.
That is, the 1 in this pattern: 00000000 | 00000000 | 00000000 | 00000001
Said bit happens to always read as 0 when you read from a float. And there is no
way to prevent the bug, other than never feeding the float a value that makes use
of the LSB -- which is not a practical solution.
Your best bet to dodge the problems caused by this is to not make use of
decimal values that want many digits of precision. The more digits you attempt
to retain, the heavier the precision loss caused by that forced-to-0 last bit.
-
ODDLY enough, though, the NWscript float is correctly capable of storing and
retrieving any integer in the range -16777215 to +16777215 -- thus demonstrating
that the float bug strikes only when the Mantissa is employed to encode values
with a _decimal_ part, whether or not they fit in the 23-bit Mantissa storage.
-
Depending on the importance of your float values, it may be desirable to
pre-emptively convert them to an integer (multiplying by a proper power of 10),
so long as it fits in the 23-bit range, and then assign that to the float variable.
I know it sounds extravagant -- well, extravagant solutions for extravagant bugs.
This will ensure that you retain (and can later retrieve) all your digits,
whatever the original value.
...
-fox
Encoded and decoded by what? The only error I saw was the encoding of the number by the compiler. When the VM was fed the correct number, by editing the .NCS file, it handled the numbers correctly. Also, your argument here seems to be one of precision. The number I used was in fact more accurate than the one from the compiler, and more accurate than the one I could get using the rules of precision. It is really not that odd: when numbers are loaded into the FPU they are extended to 80 bits. The VM will also convert StringToFloat to the correct precision, if you valued it over accuracy. Either way, floats are not accurate, as you already know. The number given by the script compiler is no more inaccurate than the one with more precision.
Unless of course I missed something in your argument.
L8
Edited by Lightfoot8, 21 November 2011 - 05:34 .