By default, numbers having a decimal point are represented as floating point numbers.

Verification:

```
print(type(1.5))
#output
<class 'float'>
```

Floating point numbers are represented in computer hardware as base-2 (binary) fractions, even though in the most common use cases we write them as decimal (base 10) numbers. The problem is that most decimal fractions cannot be represented exactly as binary fractions.

To understand this problem, consider converting the fraction `1/3` to a decimal number: it is equal to `0.333...` with infinitely repeating 3’s. No matter how many digits we write, the decimal representation will never be exactly equal to `1/3`.

Similarly, many decimal numbers cannot be represented accurately in base 2.

Some examples:

```
print("0.1 + 0.2 = {}".format(0.1+0.2))
#output
0.1 + 0.2 = 0.30000000000000004
print("0.1 + 0.4 = {}".format(0.1+0.4))
#output
0.1 + 0.4 = 0.5
print("1/10 = {}".format(1/10))
#output
1/10 = 0.1
print("Is (0.1 + 0.1 + 0.1 == 0.3) ?")
print(0.1 + 0.1 + 0.1 == 0.3)
#output
Is (0.1 + 0.1 + 0.1 == 0.3) ?
False
```

In the above examples, notice that some decimal numbers (such as `0.5`) can be represented accurately in base 2, while others (such as `0.1`) cannot.
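One way to see this directly is to convert a float to a `Decimal`, which exposes the exact value the float actually stores in binary; a small sketch:

```python
from decimal import Decimal

# Converting a float to Decimal reveals the exact value stored in binary.
print(Decimal(0.1))  # a long approximation, not exactly 0.1
print(Decimal(0.5))  # 0.5 is a negative power of two, so it is exact
#output
0.1000000000000000055511151231257827021181583404541015625
0.5
```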

I won’t pretend that I understand the intricate details behind why this happens, so I will link to some resources if you are interested in learning more: The Perils of Floating Point and What Every Computer Scientist Should Know About Floating-Point Arithmetic.

How can we overcome this issue in Python? Some solutions are:

- using the `round()` function
- using the `decimal` module
- using the `fractions` module

Let me explain each of these solutions a little more:

Using `round()`, floating point numbers can be rounded to a specified number of decimal places. If it is only used to round a final result (after performing all operations on floats), working with floats will behave as expected. Example:

```
print("Is (0.1 + 0.1 + 0.1 == 0.3) ?")
print(round(0.1 + 0.1 + 0.1, 5) == round(0.3, 5))
#output
Is (0.1 + 0.1 + 0.1 == 0.3) ?
True
```

The `round()` function accepts a number as the first parameter and, as the second parameter, another number specifying the precision after the decimal point. Example:

```
print(round(10.33333, 3))
#output
10.333
```
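One thing worth knowing: when a value is exactly halfway between two alternatives, Python 3’s `round()` rounds to the nearest even choice (so-called banker’s rounding), which can be surprising at first:

```python
# In Python 3, ties round to the nearest even value.
print(round(0.5))  # 0
print(round(1.5))  # 2
print(round(2.5))  # 2, not 3
#output
0
2
2
```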

**Edit:** It has been pointed out to me that `round()` may not be the best solution in cases where the rounded number will be used in further calculations, since rounding there will lead to accuracy issues. An option is to use string formatting to display the required number of digits, since this does not alter the underlying value. The syntax is as follows:

```
print('{:.3f}'.format(10.34123))
#output
10.341
```

Another alternative is to use the `decimal` module when dealing with decimal numbers where accuracy is very important. Example:

```
from decimal import getcontext, Decimal
from math import pi
print(getcontext())
getcontext().prec = 5
print(Decimal(1)/Decimal(3))  # context precision applies to arithmetic
getcontext().prec = 30
# Note: the Decimal constructor is exact and ignores context precision,
# so this prints the full binary-derived value of the float pi.
print(Decimal(pi))
#output
Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[], traps=
[InvalidOperation, DivisionByZero, Overflow])
0.33333
3.141592653589793115997963468544185161590576171875
```

`getcontext()` allows us to specify the precision and the rounding technique to be used. The default rounding technique is `ROUND_HALF_EVEN`, which rounds to the nearest value, with ties going to the nearest even digit.
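A small sketch of how the rounding mode affects ties, using `Decimal.quantize()` to round to a whole number under two different modes:

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

# With ROUND_HALF_EVEN, 2.5 rounds to 2 (the nearest even digit);
# with ROUND_HALF_UP, it rounds away from zero to 3.
print(Decimal("2.5").quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))  # 2
print(Decimal("2.5").quantize(Decimal("1"), rounding=ROUND_HALF_UP))   # 3
#output
2
3
```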

To deal directly with fractions, the `fractions` module can be used, which supports rational number arithmetic. Example:

```
from fractions import Fraction
num1 = Fraction(2,3)
num2 = Fraction(1,3)
print("num1 = {} and num2 = {}".format(num1,num2))
print(num1 + num2)
print(num1 - num2)
print(num1*10)
print(num1/num2)
print(type(num1/num2))
#output
num1 = 2/3 and num2 = 1/3
1
1/3
20/3
2
<class 'fractions.Fraction'>
```

There are multiple ways to construct fractions and the details can be found in the official documentation which is linked below.
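For instance, `Fraction` can be built from a numerator/denominator pair, a string, or a float; note how constructing from a float captures the float’s exact stored binary value rather than the decimal you typed:

```python
from fractions import Fraction

print(Fraction(1, 3))    # from numerator and denominator
print(Fraction("0.1"))   # from a string: exactly 1/10
print(Fraction(0.1))     # from a float: the exact binary value, not 1/10
```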

Source code for today’s blog is here.

References: