
LDL Decomposition with Python

I recently wrote a post on the Cholesky decomposition of a matrix, which you can read here: https://harrybevins.blogspot.com/2020/04/cholesky-decomposition-and-identity.html. A closely related, and more numerically stable, decomposition is the LDL decomposition, which has the form,

$\textbf{Q} = \textbf{LDL*}$,

where $\textbf{L}$ is a lower triangular matrix with diagonal entries equal to 1, $\textbf{L*}$ is its conjugate transpose and $\textbf{D}$ is a diagonal matrix. Again, an LDL decomposition can be performed with the SciPy linear algebra package, but it is a far more rewarding experience to write the code yourself, and doing so often leads to a better understanding of what is happening during the decomposition.

The relationship between the two decompositions, Cholesky and LDL, can be expressed like so,

$\textbf{Q} = \textbf{LDL*} = \textbf{LD}^{1/2}(\textbf{D}^{1/2})\textbf{*}\textbf{L*} = \textbf{LD}^{1/2}(\textbf{LD}^{1/2})\textbf{*}$.

A simple way to calculate the LDL decomposition from the Cholesky decomposition is therefore to say that if $\textbf{S}$ is the diagonal matrix containing the diagonal elements of $\textbf{L}_{cholesky}$ then,

$\textbf{D} = \textbf{S}^2$
$\textbf{L} = \textbf{L}_{cholesky}\textbf{S}^{-1}$.
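
For concreteness, here is a minimal NumPy sketch of this route, using NumPy's built-in Cholesky routine and a small example matrix (any symmetric positive definite matrix will do); the rest of this post builds the decomposition by hand instead,

import numpy as np

# Small illustrative symmetric positive definite matrix
Q = np.array([[4., 2., 2.],
              [2., 3., 1.],
              [2., 1., 3.]])

L_chol = np.linalg.cholesky(Q)        # lower triangular Cholesky factor
S = np.diag(np.diag(L_chol))          # diagonal part of L_cholesky
D = S**2                              # D = S^2
L = L_chol @ np.linalg.inv(S)         # L = L_cholesky S^{-1}, unit diagonal
print(np.allclose(Q, L @ D @ L.T))    # True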

However, the Cholesky algorithm involves taking square roots, which can be avoided by building the LDL decomposition directly with recurrence relations. Written out as matrices, for a $3 \times 3$ example, the decomposition has the following form,

$\textbf{Q} = \textbf{LDL*} =
\begin{pmatrix}
1 & 0 & 0 \\
L_{21} & 1 & 0 \\
L_{31} & L_{32} & 1
\end{pmatrix}
\begin{pmatrix}
D_{1} & 0 & 0 \\
0 & D_{2} & 0 \\
0 & 0 & D_{3}
\end{pmatrix}
\begin{pmatrix}
1 & L_{21} & L_{31} \\
0 & 1 & L_{32} \\
0 & 0 & 1
\end{pmatrix}$

By multiplying this out we can see that,

$\textbf{Q} =
\begin{pmatrix}
D_{1} & L_{21}D_{1} & L_{31}D_{1} \\
L_{21}D_{1} & L_{21}^2 D_{1} + D_{2} & L_{31}L_{21}D_{1} + L_{32}D_2\\
L_{31}D_{1} & L_{31}L_{21}D_{1} + L_{32}D_2 & L_{31}^2D_{1} + L_{32}^2D_{2} + D_3
\end{pmatrix}$

which by comparison to the elements of $\textbf{Q}$ leads to the following recurrence relations,

$D_j = Q_{jj} - \sum_{k=1}^{j-1} L_{jk}^2 D_k$,
$L_{ij} = \frac{1}{D_j} \bigg( Q_{ij} - \sum_{k=1}^{j-1}L_{ik}L_{jk}D_k\bigg) \textrm{ for } i > j$.
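
As an optional sanity check on the expansion above, the symbolic product can be multiplied out with SymPy (this snippet is purely illustrative and is not needed for the decomposition itself),

import sympy as sp

L21, L31, L32, D1, D2, D3 = sp.symbols('L21 L31 L32 D1 D2 D3')
L = sp.Matrix([[1, 0, 0],
               [L21, 1, 0],
               [L31, L32, 1]])
D = sp.diag(D1, D2, D3)
# Multiplying L D L^T reproduces the matrix written out above
sp.pprint(L * D * L.T)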

These are fairly straightforward relationships that can be implemented easily in Python.  In order to re-use this code elsewhere I have defined it in a class as follows,

import numpy as np

class ldl(object):
    # LDL decomposition of a symmetric positive definite matrix Q
    def __init__(self, Q):
        self.Q = Q
        self.L, self.D = self.calc()

    def calc(self):
        n = len(self.Q)
        L = np.zeros(self.Q.shape)
        D = np.zeros(n)
        # Work through the matrix column by column; D[j] and the jth
        # column of L only depend on the columns to their left.
        for j in range(n):
            D[j] = (self.Q[j, j] - np.sum([L[j, k]**2*D[k]
                    for k in range(j)]))
            L[j, j] = 1
            for i in range(j + 1, n):
                L[i, j] = ((1/D[j])*(self.Q[i, j] -
                    np.sum([L[i, k]*L[j, k]*D[k] for k in range(j)])))
        # Return D as a diagonal matrix rather than a vector
        D = D * np.identity(n)
        return L, D

I then proceed to test this code. I use the same code to generate $\textbf{Q}$ as I did in my demonstration of the Cholesky decomposition,

x = np.linspace(40, 120, 100)
x_ = x/x.max()   # normalise x so the powers stay well behaved
N = 6

# Polynomial basis functions evaluated at each normalised x
phi = np.empty([len(x), N])
for i in range(len(x)):
    for l in range(N):
        phi[i, l] = x_[i]**(l)

# N x N symmetric positive definite matrix
Q = phi.T @ phi

Here N defines the dimension of $\textbf{Q}$, which is a positive definite matrix, and, as discussed in my previous article, 'phi' contains the basis functions for a polynomial model I had been using in a wider application of the Cholesky decomposition. I can then call my class on $\textbf{Q}$ and calculate $\textbf{L}$ and $\textbf{D}$, which are class attributes, with the following code,

LD = ldl(Q)
L, D = LD.L, LD.D

To check everything is working properly I can print the difference between $\textbf{Q}$ and $\textbf{LDL*}$, which I find to be zero (to within machine precision) for all entries,

print(Q - L @ D @ L.T)
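
As a further check, if you have SciPy installed the result can be compared against the built-in scipy.linalg.ldl routine. This is just a sketch; SciPy may internally permute rows, so the safest comparison is between the two reconstructions of $\textbf{Q}$,

from scipy.linalg import ldl as scipy_ldl

lu, d, perm = scipy_ldl(Q, lower=True)
# Both factorisations should reproduce Q to machine precision
print(np.allclose(lu @ d @ lu.T, L @ D @ L.T))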

We can also plot the various components of the decomposition as a visual aid.
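
A minimal matplotlib sketch of such a figure (using the $\textbf{Q}$, $\textbf{L}$ and $\textbf{D}$ computed above) might look like,

import matplotlib.pyplot as plt

# Show Q, L and D side by side as heat maps
fig, axes = plt.subplots(1, 3, figsize=(9, 3))
for ax, (M, name) in zip(axes, [(Q, 'Q'), (L, 'L'), (D, 'D')]):
    im = ax.matshow(M)
    ax.set_title(name)
    fig.colorbar(im, ax=ax, fraction=0.046)
plt.tight_layout()
plt.show()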
Again this is a fun little task that can demonstrate the differences and similarities between a Cholesky decomposition and an LDL decomposition. It also serves as a means to better understand a linear algebra function that is often called but not entirely understood.

As always the code is available at: https://github.com/htjb/Blog/tree/master/LDLDecomposition. I have also included in this repository a modified version of the Cholesky function I previously wrote, to demonstrate the mathematics at the beginning of this post and the close relationship between the two operations.

Thanks for reading!
