
Estimating Pi with Python

π is one of the fundamental constants in mathematics. It is an irrational number and its value has been calculated to an extremely large number of digits. In most cases a value of 3.14 will suffice as an approximation, but there are many ways to approximate it to a higher degree of accuracy. A particularly fun and simple example can be completed in Python in approximately 15 lines of code.

For this little project I am going to use numpy, which I begin by importing as np,

import numpy as np

If we consider a quarter circle with unit radius, we can use N uniformly distributed random points between 0 and 1 to estimate its area. We can generate the arc of the circle for plotting later using the following code,

x = np.linspace(0, 1, 100)
y = np.sqrt(1 - x**2)

Our N randomly generated points are produced like so,

N = 10000  # number of random points
r_x = []
r_y = []
for i in range(N):
    r_x.append(np.random.uniform(0, 1))
    r_y.append(np.random.uniform(0, 1))
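For large N, the loop above can be replaced by a single vectorized call. This is a minor variation on the code above, not part of the original 15-line script; the seed is an assumption added here purely for reproducibility:

```python
import numpy as np

N = 10000
rng = np.random.default_rng(0)  # seeded generator so the run is reproducible
r_x = rng.uniform(0, 1, N)  # N x-coordinates drawn uniformly from [0, 1)
r_y = rng.uniform(0, 1, N)  # N y-coordinates drawn uniformly from [0, 1)
```

The vectorized arrays can be used in the counting step below exactly as the lists are.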

Two examples of this are shown below for N = 100 and N = 10000 randomly generated points. The orange line shows the perimeter of our quarter circle.


By summing up the number of points inside the quarter circle we can approximate its area, and it is clear that increasing the value of N improves the accuracy of our estimate. We perform this count as follows,

counts_inside = 0
counts_outside = 0
for i in range(len(r_x)):
    distance = r_x[i]**2 + r_y[i]**2  # squared distance of the point from the origin
    if distance <= 1:
        counts_inside += 1
    else:
        counts_outside += 1

Here distance is the squared distance of each random point from the origin, i.e. the square of the radius of a circle drawn through that point and centred on the origin. Since our radius is 1, comparing the squared distance to 1 is equivalent to comparing the distance itself: if it exceeds 1, the point lies outside our quarter circle.

Now we note that the area of our quarter circle is $\pi/4$, since our radius is 1, and that the area of the unit square containing it is 1. Consequently we can evaluate $\pi$ using the following formula,

$\frac{A_{\text{quarter circle}}}{A_{\text{unit square}}} = \frac{counts\_inside}{counts\_inside + counts\_outside}$

which, substituting the two areas, becomes,

$\pi = 4 \frac{counts\_inside}{counts\_inside + counts\_outside}$

and in Python this translates to,

pi_estimate = 4 * (counts_inside / (counts_inside + counts_outside))

As N increases, the estimated ratio of the quarter circle's area to the unit square's becomes more accurate, and consequently so does the estimate of $\pi$. This is shown in the left-hand figure below, where the blue line tracks the estimate of $\pi$ as N increases. The black line is the value of $\pi$ stored in numpy (np.pi), which is commonly used. The right-hand figure shows the ratio of the numpy value to the estimate calculated using the quarter circle, which tends to 1 with increasing N. For $N = 10^7$ this method approximates $\pi$ as 3.1414856, which agrees with the accepted value of 3.14159... to three decimal places!
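The steps above can be combined into a short self-contained function. This is a sketch using a vectorized count rather than the explicit loop, with the function name, seed, and structure being choices made here rather than the original code:

```python
import numpy as np

def estimate_pi(N, seed=0):
    """Monte Carlo estimate of pi from N uniform random points in the unit square."""
    rng = np.random.default_rng(seed)
    r_x = rng.uniform(0, 1, N)
    r_y = rng.uniform(0, 1, N)
    # Count points whose squared distance from the origin is at most 1,
    # i.e. points that fall inside the quarter circle.
    counts_inside = np.sum(r_x**2 + r_y**2 <= 1)
    return 4 * counts_inside / N

print(estimate_pi(10**6))
```

Because every point lands either inside or outside the quarter circle, counts_inside + counts_outside equals N, so dividing by N is equivalent to the formula above.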
The full code including that used to produce the figures can be found at https://github.com/htjb/Blog/tree/master/Estimating_Pi_with_Python.
