Cholesky factorization is the process of finding a matrix L, for a given matrix A, such that:
A = L Lᵀ

where Lᵀ is the transpose of matrix L. Such a factorization is possible if and only if A is symmetric positive definite. A real matrix A is positive definite if

xᵀ A x > 0

for all non-zero vectors x. Since A is symmetric and L is triangular, only a triangular matrix needs to be stored. By convention, we will store only the lower half and the diagonal; the result is called a lower-triangular matrix.
An easy way to generate a symmetric positive definite matrix is to start with a lower-triangular matrix, L, and multiply it with its transpose to get A. Alternatively, a symmetric matrix is guaranteed to be positive definite as long as every value on its principal diagonal is positive and strictly greater than the sum of the absolute values of all the other elements in the corresponding row (equivalently, column, since the matrix is symmetric). For example, we can set:

Aii = 2 × Σ |Aki|, k ≠ i, 1 ≤ k ≤ n

where |x| denotes the absolute value of x.
The basic idea behind factorizing a matrix A is to divide it into three parts:

    A = | A11      |
        | A21  A22 |

where A11 is the top-left element, A21 is the column vector below it, and A22 is the trailing submatrix. Notice that we only operate on the lower half of A since it is symmetric.

[Figure: the partitioning of A into A11, A21, and A22; each color block represents one element in the matrix.]
Next, perform the following three steps:

1. L11 = √A11 (take the square root of the top-left element).
2. L21 = A21 / L11 (divide the column below it by L11).
3. A22 = A22 − L21 L21ᵀ (subtract the rank-one product L21 L21ᵀ from the trailing submatrix).

Repeat the same three steps on the updated A22 until the entire matrix has been factorized.
Notice that there are opportunities for parallelization in updating both A21 and A22 in each iteration of the above algorithm. Of course, the amount of available parallelism in the update of A22 is an order of magnitude higher than that available for A21, since A22 contains O(n²) elements while A21 contains only O(n).
Cholesky factorization is a very common linear algebra operation. Several online references are easily available, e.g., a discussion of basic and blocked Cholesky factorization at the University of Texas, Austin, and another discussion with reference to the BLAS (Basic Linear Algebra Subprograms) library at the University of Tennessee, Knoxville. Many online discussions deal with sparse matrices; we are concerned only with dense matrices for this assignment.
B629, Arun Chauhan, Department of Computer Science, Indiana University