

Showing posts from April, 2020

LDL Decomposition with Python

I recently wrote a post on the Cholesky decomposition of a matrix, which you can read here: https://harrybevins.blogspot.com/2020/04/cholesky-decomposition-and-identity.html . A closely related and more numerically stable decomposition is the LDL decomposition, which has the form $\textbf{Q} = \textbf{L}\textbf{D}\textbf{L}^*$, where $\textbf{L}$ is a lower triangular matrix with diagonal entries equal to 1, $\textbf{L}^*$ is its conjugate transpose and $\textbf{D}$ is a diagonal matrix. Again, an LDL decomposition can be performed using the SciPy or NumPy linear algebra packages, but it is a far more rewarding experience to write the code yourself, and doing so often leads to a better understanding of what is happening during the decomposition. The relationship between the two decompositions, Cholesky and LDL, can be expressed like so: $\textbf{Q} = \textbf{L}\textbf{D}\textbf{L}^* = \textbf{L}\textbf{D}^{1/2}(\textbf{D}^{1/2})^*\textbf{L}^* = \textbf{L}\textbf{D}^{1/2}(\textbf{L}\textbf{D}^{1/2})^*$. A simple way to calculate…
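To make the idea concrete, here is a minimal sketch of an LDL decomposition for a real symmetric positive-definite matrix, written from the standard recurrences rather than taken from the post itself; the function name ldl and the test matrix are illustrative choices, not the post's code.

import numpy as np

def ldl(Q):
    # Returns L (unit lower triangular) and d, the diagonal of D,
    # such that Q = L @ diag(d) @ L.T for a real symmetric Q.
    n = Q.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        # Diagonal entry: D_j = Q_jj - sum_k L_jk^2 D_k
        d[j] = Q[j, j] - np.sum(L[j, :j]**2 * d[:j])
        for i in range(j + 1, n):
            # Off-diagonal entry: L_ij = (Q_ij - sum_k L_ik L_jk D_k) / D_j
            L[i, j] = (Q[i, j] - np.sum(L[i, :j] * L[j, :j] * d[:j])) / d[j]
    return L, d

Q = np.array([[4., 2., 2.],
              [2., 3., 1.],
              [2., 1., 5.]])
L, d = ldl(Q)
print(np.allclose(L @ np.diag(d) @ L.T, Q))  # True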

Optimisation: Basic Basinhopping

Basinhopping optimisation routines are a family of routines designed to find the global minimum of a function in the presence of local minima. The basic principle behind a basinhopping routine is to randomly perturb the current best set of parameters at each iteration and perform a local minimisation from each perturbed starting point. The best parameters are updated as the routine progresses whenever a local minimum is found with a smaller value of the objective function than the current best. In this post I want to focus on the basinhopping side of the algorithm rather than the local minimisation. I am therefore going to use the steepest descent algorithm I built in the following post to perform the local minimisation: https://harrybevins.blogspot.com/2020/04/optimisation-by-descent.html . There are, however, a large number of other local minimisation routines that could be used, such as gradient descent or the Nelder-Mead routine. A simple basinhopping routine is characterised…
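As a sketch of the idea (not the post's exact code), the loop below perturbs the current best parameters and runs a local minimisation from each perturbed start; here scipy.optimize.minimize stands in for the steepest descent minimiser built in the earlier post, and the step size and test function are assumptions made for the example.

import numpy as np
from scipy.optimize import minimize  # stand-in for the post's steepest descent minimiser

def basinhopping(objective, x0, n_iter=100, step_size=0.5, seed=0):
    # Perturb the current best point, minimise locally, and keep
    # the result only if it improves on the best objective so far.
    rng = np.random.default_rng(seed)
    best_x = np.asarray(x0, dtype=float)
    best_f = objective(best_x)
    for _ in range(n_iter):
        # Random perturbation of the current best parameters
        trial = best_x + rng.uniform(-step_size, step_size, size=best_x.size)
        res = minimize(objective, trial)  # local minimisation from the perturbed start
        if res.fun < best_f:
            best_x, best_f = res.x, res.fun
    return best_x, best_f

# Example: a 1D function with several local minima
f = lambda x: np.cos(14.5 * x[0] - 0.3) + (x[0] + 0.2) * x[0]
x, fx = basinhopping(f, x0=[1.0])
print(x, fx)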

Cholesky Decomposition and the Identity Matrix

The Cholesky decomposition of a matrix is a useful tool which can be used to transform a matrix into the identity matrix. My interest in this stems from exactly such a use: in order to normalise an optimisation problem I am required to remove the complexity from my data matrix. A Cholesky decomposition of a positive-definite matrix $Q$ of dimensions $(n \times n)$ has the form $Q = LL^*$, where $L^*$ is the conjugate transpose of $L$ and $L$ is a lower triangular matrix with real and positive diagonal entries. This decomposition can be achieved easily in Python with both the numpy.linalg and scipy.linalg packages as,

L = np.linalg.cholesky(Q)

Transforming $Q$ into the identity matrix then follows from $Q' = L^{-1} Q (L^{-1})^{T}$, and in Python we can write this as,

A = np.linalg.inv(L)
Q_ = A @ Q @ A.T

This is a simple enough task and for low order $n$ it is capable of returning the identity matrix. Depending on the scale of the entries in $Q$, I have found that the accuracy of the…
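Putting those pieces together, a minimal self-contained check might look like the following; the random test matrix is my own construction, chosen to be positive-definite by design.

import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4))
Q = M @ M.T + 4 * np.eye(4)   # symmetric positive-definite by construction

L = np.linalg.cholesky(Q)     # Q = L L^T
A = np.linalg.inv(L)
Q_ = A @ Q @ A.T              # Q' = L^{-1} Q (L^{-1})^T

print(np.allclose(Q_, np.eye(4)))  # True up to floating-point error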

Optimisation: Descent Algorithm

There are a large number of different optimisation routines used for fitting data sets, and some are more effective than others. A simple descent-based algorithm can be produced using Python, and in this post I aim to describe how this can be done. I will also consider the advantages and disadvantages of this method. The basic idea behind the algorithm is to step around the parameter space of our model and move in whichever direction best improves the value of our objective function, until no further improvement can be made. In this instance I will be using $\chi^2$ as my objective function, which is given by $\chi^2 = \sum_i(y_i - y_{\mathrm{fitted},\,i})^2.$ The stepping routine can be defined in many different ways. For example, it can be based on the derivative of the fitted model, known as gradient descent, or on a continually updated parameter step with checks in every direction in the parameter space. For this example I will use a fixed parameter step, but the code can easily be…
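For illustration, here is a minimal fixed-step descent sketch along these lines; it is not the post's exact code, and the straight-line model, step size and noisy data are assumptions made for the example.

import numpy as np

def chi_squared(params, x, y, model):
    # Objective: sum of squared residuals between data and fitted model
    return np.sum((y - model(x, params))**2)

def descent(x, y, model, params, step=0.01, max_iter=10000):
    # Try moving each parameter up and down by a fixed step and
    # accept any move that lowers the objective function.
    best = chi_squared(params, x, y, model)
    for _ in range(max_iter):
        improved = False
        for i in range(len(params)):
            for direction in (step, -step):
                trial = params.copy()
                trial[i] += direction
                f = chi_squared(trial, x, y, model)
                if f < best:
                    params, best = trial, f
                    improved = True
        if not improved:
            break  # no move in any direction improves the fit
    return params, best

# Example: fit a straight line y = m*x + c to noisy data
model = lambda x, p: p[0] * x + p[1]
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, 50)
params, chi2 = descent(x, y, model, np.array([0.0, 0.0]))
print(params, chi2)  # params close to (2, 1)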

Estimating Pi with Python

π is one of the fundamental constants in mathematics. It is an irrational number and its value has been calculated to an extremely large number of digits. In most cases a value of 3.14 will suffice as an approximation, but there are many ways in which it can be approximated to a higher degree of accuracy. A particularly fun and simple example can be completed with Python in approximately 15 lines of code. For this little project I am going to use numpy, and I begin by importing this Python module as np,

import numpy as np

If we consider a quarter circle with unit radius, we can use N uniform randomly generated points between 0 and 1 to estimate its area. We can generate our circle for plotting later using the following code,

x = np.linspace(0, 1, 100)
y = np.sqrt(1 - x**2)

Our N randomly generated points are produced like so,

r_x = []
r_y = []
for i in range(N):
    r_x.append(np.random.uniform(0, 1))
    r_y.append(np.random.uniform(0, 1))

Two examples of the…
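To finish the idea as a sketch: the estimate follows from counting how many of the N points land inside the quarter circle, since that fraction approaches π/4 as N grows. The vectorised version below is my own condensed variant of the approach rather than the post's remaining code.

import numpy as np

N = 100000
r_x = np.random.uniform(0, 1, N)
r_y = np.random.uniform(0, 1, N)

# A point (x, y) lies inside the unit quarter circle when x^2 + y^2 <= 1,
# so the fraction of such points estimates (area of quarter circle) = pi/4
inside = r_x**2 + r_y**2 <= 1
pi_estimate = 4 * np.sum(inside) / N
print(pi_estimate)  # ~3.14 for large N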