Explore how computers represent decimal numbers in binary floating point
The IEEE 754 Formula
value = (-1)^sign × 2^(exponent - bias) × (1 + mantissa)
Sign bit: 0 = positive, 1 = negative
Exponent: biased by 127 (32-bit) or 1023 (64-bit)
Mantissa: implicit leading 1 for normalized numbers
Denormalized: value = (-1)^sign × 2^(1 - bias) × (0 + mantissa)
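As a sketch, the two formulas above can be applied by hand to the raw bits of a 32-bit float. This minimal Python example (the function name `decode_float32` is ours, not part of any standard API) uses the stdlib `struct` module to get the bit pattern:

```python
import struct

def decode_float32(x: float) -> float:
    """Decode a float by applying the IEEE 754 formula to its 32-bit pattern.

    Handles normalized and denormalized values; infinities and NaN
    (exponent field all ones) are not treated specially here.
    """
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31                  # 1 bit
    exponent = (bits >> 23) & 0xFF     # 8 bits, biased by 127
    mantissa_bits = bits & 0x7FFFFF    # 23 bits
    fraction = mantissa_bits / 2**23   # mantissa field as a fraction in [0, 1)
    if exponent == 0:
        # Denormalized: no implicit leading 1, fixed exponent of 1 - bias
        return (-1) ** sign * 2 ** (1 - 127) * fraction
    return (-1) ** sign * 2 ** (exponent - 127) * (1 + fraction)

print(decode_float32(1.5))   # 1.5 (exactly representable: 1.1 binary x 2^0)
```

Note that `struct.pack(">f", x)` first rounds the Python double to the nearest float32, so values like 3.14159 round-trip to their float32-rounded versions, not their decimal originals.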
Preset Values
pi (3.14159...)
e (2.71828...)
0.1 (inexact!)
0.2 (inexact!)
0.1 + 0.2
+Infinity
-Infinity
NaN
-0
Max float32
Min normal float32
Min denormal float32
Max float64
Min positive float64
1.0
-1.0
0.0
32-bit Single Precision (float)
Sign (1 bit)
Exponent (8 bits)
Mantissa (23 bits)
64-bit Double Precision (double)
Sign (1 bit)
Exponent (11 bits)
Mantissa (52 bits)
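The two field layouts above can be pulled apart with shifts and masks. A minimal Python sketch for the 64-bit case (the helper name `float64_fields` is ours):

```python
import struct

def float64_fields(x: float) -> tuple[int, int, int]:
    """Split a double into its sign (1 bit), exponent (11 bits), mantissa (52 bits)."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF
    mantissa = bits & ((1 << 52) - 1)
    return sign, exponent, mantissa

# 1.0 is stored as sign=0, exponent=1023 (the bias, so 2^0), mantissa=0
print(float64_fields(1.0))   # (0, 1023, 0)
print(float64_fields(-2.0))  # (1, 1024, 0)
```

The 32-bit version is the same idea with `">f"`/`">I"`, a shift of 31, an 11-bit exponent replaced by 8 bits, and a bias of 127.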
Special Values Reference
Value          Sign  Exponent   Mantissa   Condition
+0             0     all zeros  all zeros  Zero
-0             1     all zeros  all zeros  Negative zero
+Infinity      0     all ones   all zeros  Overflow / division by zero
-Infinity      1     all ones   all zeros  Negative overflow
NaN            0/1   all ones   non-zero   Undefined operations (0/0, sqrt(-1))
Denormalized   0/1   all zeros  non-zero   Very small numbers (gradual underflow)
Denormalized numbers have no implicit leading 1 in the mantissa, allowing representation of values smaller than the minimum normal number at the cost of precision.
Why 0.1 + 0.2 != 0.3
In base 10, 1/3 = 0.333... repeats forever.
Similarly, in base 2, 1/10 = 0.0001100110011... repeats forever.
Since an IEEE 754 format has only finitely many bits, 0.1 is rounded to the nearest representable value.
The same happens with 0.2. When the two are added, their rounding errors compound:
0.1 + 0.2 = 0.30000000000000004
This is not a bug -- it is a fundamental consequence of binary floating point.
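A quick way to see this for yourself, along with the standard workaround of comparing with a tolerance rather than with `==`:

```python
import math

print(0.1 + 0.2)                     # 0.30000000000000004
print(0.1 + 0.2 == 0.3)              # False
print(math.isclose(0.1 + 0.2, 0.3))  # True: compare with a tolerance instead

# The value 0.1 actually rounds to, with enough digits to see the error:
print(f"{0.1:.20f}")                 # 0.10000000000000000555
```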
Click the 0.1 and 0.1 + 0.2 presets above to see the bit patterns.