How can I understand this another floating-point paradox --- 0.1 represented as double is more accurate than 0.1 represented as long double?
In [134]: np.double(0.1)
Out[134]: 0.1
In [135]: np.longdouble(0.1)
Out[135]: 0.10000000000000000555
It's not more accurate. The longdouble repr is just showing you more of the inaccuracy that was already present.
0.1 is a Python float, which has the same precision as numpy.double. It does not represent the exact decimal value 0.1, because binary floating point cannot represent that value in a finite number of bits. 0.1 represents this value:

0.1000000000000000055511151231257827021181583404541015625

which is the closest value to 0.1 that can be represented within the limits of the type's precision.
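You can inspect the exact stored value yourself with the standard-library decimal module; a quick sketch:

```python
from decimal import Decimal

# Decimal(float) converts the binary double to decimal exactly, with no
# rounding, so it reveals the value actually stored for the literal 0.1.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```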
When you construct a numpy.double or numpy.longdouble from 0.1, this is the value you get. For numpy.longdouble, this is not the best approximation of 0.1 the type could store.

The repr of both numpy.double and numpy.longdouble shows the minimum number of decimal digits needed to produce an output that will reproduce the original value if converted back to the original type. For numpy.double, that's just "0.1", because 0.1 was already the closest double-precision floating-point value to 0.1. For numpy.longdouble, it takes more digits, because numpy.longdouble has more precision, so it can represent values closer to 0.1 than 0.1 does.

If you want the best long double approximation of 0.1, pass a string instead of a Python float:
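A minimal sketch (the exact printed digits depend on the platform's long double format, e.g. 80-bit extended on x86 Linux versus plain double on Windows):

```python
import numpy as np

# Parsing the string directly lets numpy round the decimal value at full
# long double precision, instead of inheriting a double's rounding error.
best = np.longdouble('0.1')

# Going through a Python float first bakes in double-precision rounding,
# so the result is exactly the double value, widened to long double.
via_float = np.longdouble(0.1)

print(best)
print(via_float)
```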