Floating-point
In programming, a floating-point number, or float, is a data type that stores numeric values with a fractional part. A floating-point number is one where the position of the decimal point can "float" rather than being fixed, as in 1.23, 87.425, and 9039454.2. Different programming languages or systems may have different size limits or ways of defining floating-point numbers. Refer to the programming language documentation for details.
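For example, Python's built-in float is a 64-bit IEEE 754 double on virtually all platforms, and the standard library exposes its size limits through sys.float_info. A minimal sketch:

```python
import sys

# Python's float is a 64-bit IEEE 754 double; sys.float_info reports
# the limits of that representation on the current platform.
print(sys.float_info.max)      # largest finite float
print(sys.float_info.dig)      # decimal digits reliably representable
print(sys.float_info.epsilon)  # gap between 1.0 and the next float
```

Other languages expose the same limits differently, such as FLT_MAX and DBL_MAX in C's float.h.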
Floating-point numbers are stored in binary using the IEEE 754 standard, and an IEEE-754 converter can show a number's binary and hexadecimal representation.
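To illustrate, the bit pattern defined by IEEE 754 can be inspected in Python with the standard struct module (the helper name float_to_hex below is our own, not a library function):

```python
import struct

def float_to_hex(value, single=False):
    """Return the IEEE 754 bit pattern of a float as a hex string."""
    # '>f' = big-endian 32-bit single precision; '>d' = 64-bit double
    fmt = ">f" if single else ">d"
    return struct.pack(fmt, value).hex()

print(float_to_hex(1.0, single=True))  # 3f800000
print(float_to_hex(0.1))               # 3fb999999999999a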
Floating point errors
Floating point variables may have rounding or precision errors because decimal numbers are stored in a fixed amount of memory that cannot represent every real number exactly. When values are converted between binary and decimal, small errors may be introduced.
Floating point errors become important when dealing with financial data, scientific data, large data sets, or when comparing float variables for equality. To prevent these errors, use a different data type (such as an integer or a decimal type), apply explicit rounding, or compare values with an error tolerance.
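As a sketch of those prevention techniques in Python, the snippet below uses math.isclose for tolerance-based comparison and the decimal module for exact decimal arithmetic:

```python
import math
from decimal import Decimal

# Binary floats make 0.1 + 0.2 differ slightly from 0.3.
a = 0.1 + 0.2
print(a == 0.3)             # False: exact comparison fails
print(math.isclose(a, 0.3)) # True: comparison with an error tolerance

# The Decimal type stores decimal digits exactly, at some performance cost.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

For financial data, a common alternative is to store amounts as integers in the smallest unit (for example, cents) so no fractional rounding occurs.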
Below is a Python program that shows a computational error.
a = 0.1
b = 0.000001
print(f"a: {a}")
print(f"a showing more decimal places: {a:.30f}")
print(f"b: {b:f}")
print(f"b showing more decimal places: {b:.30f}")
total = a + b  # named "total" to avoid shadowing the built-in sum()
print(f"{a} + {b:f} = {total:.30f}")
This program code prints:
a: 0.1
a showing more decimal places: 0.100000000000000005551115123126
b: 0.000001
b showing more decimal places: 0.000000999999999999999954748112
0.1 + 0.000001 = 0.100001000000000006551204023708
Related terms
Computer abbreviations, Data type, Exponent, f, Float, Floating-point notation, Floating-Point Unit, FLOPS, fp, FPU, Mantissa, Programming terms, Whole number