
Random Number Generation: Box-Muller Transform

Knowing how to generate random numbers is a key tool in any data scientist's toolbox. They appear in many different optimisation routines and machine learning processes. However, we often use the random number generators built into programming languages without thinking about what is happening below the surface. For example, in Python, if I want to generate a random number uniformly distributed between 0 and 1, all I need to do is import numpy and use the np.random.uniform() function. Similarly, if I want Gaussian random numbers, for example to simulate random noise in an experiment, all I need to do is use np.random.normal(). But what is actually happening when I call these functions? And how do I go about generating random numbers from scratch?

This is hopefully the first of a number of blog posts on the subject of random numbers and how to generate them. There are multiple methods that can be used to do this, such as the inverse probability transform method, and I will talk about these in future posts.

Today I will look at a method for generating normally distributed random numbers called the Box-Muller Transform. In order to use the Box-Muller Transform I need a method for generating uniform random numbers. For the purposes of this post I will cheat and use numpy's built-in uniform random number generator, with the intention of writing my own code in the future!

With this in mind I am going to define 'my' uniform random number generator in a file and class of its own. I have called the file 'uniform_dist.py' and in it I write the following,
import numpy as np

class uniform():
    def __init__(self, size, low=0, high=1):
        # Draw 'size' samples uniformly from the interval [low, high)
        self.rand = np.random.uniform(low, high, size)
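For example (a quick usage sketch), a set of five uniform samples can then be drawn with,

from uniform_dist import uniform

samples = uniform(5).rand  # array of five numbers drawn from [0, 1)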

Technically, because I am using numpy's uniform random number generator, the normally distributed random numbers I will be generating here are pseudorandom. Pseudorandom numbers are the predictable outcomes of an algorithm. However, the information needed to predict the series of generated numbers is not easily accessed, and the numbers are, to all intents and purposes, random in nature.
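As a quick illustration of this predictability (a toy example using numpy's legacy seeding interface), fixing the seed makes the 'random' sequence exactly reproducible,

import numpy as np

np.random.seed(42)                   # fix the generator's internal state
first = np.random.uniform(0, 1, 3)

np.random.seed(42)                   # resetting the seed replays the same sequence
second = np.random.uniform(0, 1, 3)

print(np.allclose(first, second))    # prints True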

The Box-Muller Transform maps two samples from uniform distributions onto a normal distribution with a mean $\mu$ of 0 and a standard deviation $\sigma$ of 1. For clarity, a normal distribution is given by,

$P(x) =  \frac{1}{\sigma \sqrt{2\pi}}\exp\bigg(-\frac{1}{2}\bigg(\frac{x-\mu}{\sigma}\bigg)^2\bigg)$.

The algorithm takes the two randomly generated uniformly distributed numbers $U_1$ and $U_2$ and calculates the normally distributed numbers like so,

$Z_1 = R \cos(\theta) = \sqrt{-2\ln(U_1)}\cos(2\pi U_2)$,

and,

$Z_2 = R \sin(\theta) = \sqrt{-2\ln(U_1)}\sin(2\pi U_2)$,

where,

$R^2 = -2 \ln(U_1)$ and $\theta = 2\pi U_2$.

Recall that this gives us the standard normal distribution with $\mu = 0$ and $\sigma = 1$. In order to translate this to a normal distribution with an arbitrary mean and standard deviation we use the following,

$r_1 = Z_1\sigma + \mu$,
$r_2 = Z_2\sigma + \mu$.
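As a quick worked example (with numbers of my own choosing), suppose the generator returns $U_1 = 0.5$ and $U_2 = 0.25$. Then $R = \sqrt{-2\ln(0.5)} \approx 1.18$ and $\theta = 2\pi \times 0.25 = \pi/2$, giving $Z_1 = R\cos(\pi/2) = 0$ and $Z_2 = R\sin(\pi/2) \approx 1.18$. For a target distribution with $\mu = 0$ and $\sigma = 5$ these scale to $r_1 = 0$ and $r_2 \approx 5.89$.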

We can code this into another Python file, which I have called 'normal_dist.py', and call our uniform random number generator using,
import numpy as np
from uniform_dist import uniform

class normal():
    def __init__(self, mean, standard, size):
        # Two independent sets of uniformly distributed random numbers
        x1 = uniform(size).rand
        x2 = uniform(size).rand

        # Polar coordinates of the Box-Muller Transform
        # (note that np.log diverges in the rare case x1 = 0)
        R = np.sqrt(-2*np.log(x1))
        theta = 2*np.pi*x2

        # Scale the standard normal deviates to the requested
        # mean and standard deviation
        r1 = R*np.cos(theta)*standard + mean
        r2 = R*np.sin(theta)*standard + mean
        self.random = r1
Here the normal() class is set up in such a way that the user can provide a mean and standard deviation of their choice and request a sample of random numbers of a given size. The class only keeps one of the two sets of randomly generated numbers, $r_1$ and $r_2$, that are produced by the transformation. More sophisticated implementations divide the requested sample size between $r_1$ and $r_2$ (discarding a spare value if the requested size is an odd integer), as sketched below.
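A minimal sketch of that idea, written as a hypothetical standalone function rather than a modification of the class above, might look like,

import numpy as np
from uniform_dist import uniform

def normal_sample(mean, standard, size):
    # Each uniform pair yields two normal deviates, so we only
    # need enough pairs to cover half the requested size
    half = (size + 1)//2
    x1 = uniform(half).rand
    x2 = uniform(half).rand

    R = np.sqrt(-2*np.log(x1))
    theta = 2*np.pi*x2

    # Keep both outputs of the transform and trim the spare
    # value when 'size' is odd
    z = np.concatenate([R*np.cos(theta), R*np.sin(theta)])
    return z[:size]*standard + mean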

The Box-Muller Transform is derived from the joint distribution of two independent normally distributed (with $\mu = 0$ and $\sigma = 1$) variables, $X$ and $Y$, in Cartesian space, which is given by,

$P(X, Y) = P(X)P(Y) = \frac{1}{\sqrt{2\pi}}\exp\bigg(-\frac{X^2}{2}\bigg) \frac{1}{\sqrt{2\pi}}\exp\bigg(-\frac{Y^2}{2}\bigg)$,

which simplifies to,

$P(X, Y) = \frac{1}{2\pi}\exp\bigg(-\frac{X^2 + Y^2}{2}\bigg)$.

Now recognising that $R^2 = X^2 + Y^2$ where $R$ is a radius in polar coordinates we see that,

$P(R, \theta) = \frac{1}{2\pi}\exp\bigg(-\frac{R^2}{2}\bigg)$.

If we say that our uniformly distributed numbers $U_1$ and $U_2$ translate to $R$ and $\theta$ via,

$U_2 = \frac{\theta}{2\pi} \Rightarrow \theta = 2\pi U_2$,
$U_1 = \exp\bigg(-\frac{R^2}{2}\bigg) \Rightarrow R = \sqrt{-2\ln(U_1)}$.

Starting to look familiar, no? Here we see that $\theta$ is uniformly distributed across the range of angles $0$ to $2\pi$, while $R$ produces the appropriate distribution of radii in polar coordinates (a Rayleigh distribution: $\exp(-R^2/2)$ is the probability that a point drawn from the joint distribution lies outside radius $R$, and that probability is itself uniformly distributed between 0 and 1, which is why inverting it with a uniform sample gives valid radii). All that is left to do is convert back from polar coordinates to the required normally distributed variables via the definitions $X = R\cos(\theta)$ and $Y = R\sin(\theta)$, with $X$ and $Y$ corresponding to $Z_1$ and $Z_2$.
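If you want to convince yourself of this correspondence numerically, a quick sanity check (using numpy's built-in generators purely for comparison) is to compare the radii of standard normal pairs against radii generated by inverting $U_1 = \exp(-R^2/2)$,

import numpy as np

n = 100000

# Radii of points drawn from two independent standard normals
r_gauss = np.hypot(np.random.normal(size=n), np.random.normal(size=n))

# Radii produced by inverting U = exp(-R^2/2)
r_bm = np.sqrt(-2*np.log(np.random.uniform(size=n)))

# Both follow the same Rayleigh distribution, so the summary
# statistics should agree closely
print(r_gauss.mean(), r_bm.mean())  # both close to sqrt(pi/2) ~ 1.25
print(r_gauss.std(), r_bm.std())    # both close to sqrt(2 - pi/2) ~ 0.66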

We can see the algorithm in action by calling the class defined above and accessing the attribute 'random' like so,
random = normal(0, 5, 20000).random
When the 20000 random numbers are plotted as a histogram alongside an appropriately scaled normal distribution with a mean of 0 and a standard deviation of 5, we produce the graph below. This verifies that the code is working, and the code itself can be found on my github at: https://github.com/htjb/Blog/tree/master/random_numbers.
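For reference, a minimal sketch of that comparison (assuming matplotlib is available; it will not reproduce the exact styling of the original figure) is,

import numpy as np
import matplotlib.pyplot as plt
from normal_dist import normal

mean, standard = 0, 5
sample = normal(mean, standard, 20000).random

# Histogram of the generated numbers, normalised to unit area
plt.hist(sample, bins=100, density=True, label='Box-Muller sample')

# Overlay the target normal distribution for comparison
x = np.linspace(mean - 4*standard, mean + 4*standard, 500)
pdf = np.exp(-0.5*((x - mean)/standard)**2)/(standard*np.sqrt(2*np.pi))
plt.plot(x, pdf, label='Normal PDF')
plt.legend()
plt.show()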

Thanks for reading!

