I recently came across a problem with floating point numbers. I know there are some inherent inaccuracies, but I was surprised to find a problem with a relatively simple number.
Floats are numbers stored as two parts: a whole number (the significand) and an exponent. Generally you're dealing with numbers on a similar scale, e.g. all millimetres or all kilometres, so this representation makes sense.
The term "float" refers to the fact that the decimal point 'floats'. For instance, the following all pair a whole number with a different exponent:
- 1.1 is 11 × 10⁻¹
- 0.15 is 15 × 10⁻²
- 1.5 is 15 × 10⁻¹
- 15000.0 is 15 × 10³
Simple enough, but I had a bug with 1.1 - I was getting 1.0999999999989! An odd issue until you factor in that computers think in binary. I didn't expect this to be a problem, because both the whole number and the exponent are integers. What I'd failed to realise is that the floating point is also base-2, not base-10. So the floats above are actually stored as:
- 1.1 is 154811237190861 × 2⁻⁴⁷
- 0.15 is 168884986026394 × 2⁻⁵⁰
- 1.5 is 3 × 2⁻¹
- 15000.0 is 1875 × 2³
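You can peek at the exact fraction a float actually stores with Python's `float.as_integer_ratio()`, which returns the stored value as an integer over a power of two. A minimal sketch (the exact big integers come out slightly different from my rounded figures above, because a double keeps 53 significand bits):

```python
# Inspect the exact binary fractions that doubles actually store.
for x in (1.1, 0.15, 1.5, 15000.0):
    num, den = x.as_integer_ratio()   # exact stored value as num / den
    exponent = den.bit_length() - 1   # den is always a power of two
    print(f"{x} is stored as {num} * 2**-{exponent}")

# Printing 1.1 to more digits reveals it isn't stored exactly:
print(f"{1.1:.20f}")
```

Notice that 1.5 and 15000.0 come back as tidy exact ratios, while 1.1 and 0.15 produce enormous numerators: the binary approximation showing through.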
Those horrible big numbers for 0.15 and 1.1 are only approximations - the true binary expansions never terminate, so no number of bits can hold them exactly. The issue wasn't really the inaccuracy - I expected the float to only be able to deal with a few significant figures. What was a surprise (and shouldn't have been) was that the numbers it has problems with are not the ones I expected. To a float, 1.1 is like 1/3 in decimal: a recurring fraction that can never be written down exactly.
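The recurring-fraction claim is easy to check by doing the base-2 long division yourself. A quick sketch (the helper name is mine):

```python
def binary_fraction(num, den, digits):
    """Return the first `digits` binary digits of num/den (for 0 <= num/den < 1)."""
    out = []
    for _ in range(digits):
        num *= 2
        out.append(str(num // den))  # next binary digit
        num %= den                   # remainder carries on, just like long division
    return "".join(out)

# The fractional part of 1.1 is 1/10; its binary expansion repeats '0011' forever:
print("0." + binary_fraction(1, 10, 20))  # 0.00011001100110011001
```

However many digits you ask for, the remainder never hits zero, so the pattern cycles endlessly - exactly like 0.333... in base 10.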
This normally isn't a problem: for most of the sorts of calculations where floating point numbers are used, this inaccuracy is a worthwhile trade for the speed of the calculation.