17.2 ELL_Matrix Methods
The CÆSAR Code Package uses the ELL_Matrix class to contain and
manipulate algebraic matrices. The following discussion assumes that A and
B are matrices of dimension M × N, a_{i,j} is an element of A,
x is a vector of length N, and y is a vector of length M.
The CÆSAR Code Package contains procedures to calculate matrix norms and
matrix-vector products. In parallel, these operations require global
communication. Matrix-vector products (MatVecs) are defined by

    Ax = y , \qquad y_i = \sum_{j=1}^{N} a_{i,j} \, x_j .        (17.9)
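As a plain sequential illustration of Eq. (17.9) (not the ELL_Matrix implementation itself, which stores the matrix in sparse ELL format and performs the product in parallel), a dense MatVec can be sketched as:

```python
def matvec(A, x):
    """Dense matrix-vector product y = A x, Eq. (17.9).

    A is a list of M rows, each of length N; x has length N.
    Returns y with y_i = sum_j A[i][j] * x[j].
    """
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[1.0, 2.0],
     [3.0, 4.0],
     [5.0, 6.0]]          # M = 3, N = 2
x = [1.0, -1.0]
print(matvec(A, x))        # [-1.0, -1.0, -1.0]
```

In parallel, each process would compute the partial sums for its local rows or columns, with global communication assembling the full result as noted above.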
A matrix norm is any scalar-valued function on a matrix, denoted by the
double-bar notation \| A \|, that satisfies these properties:

    \| A \| \geq 0 ,
    \| A + B \| \leq \| A \| + \| B \| ,
    \| sA \| = | s | \, \| A \| ,        (17.10)

for any scalar s.
A frequently used matrix norm is the Frobenius norm:

    \| A \|_{F} = \left( \sum_{i=1}^{M} \sum_{j=1}^{N} | a_{i,j} |^{2} \right)^{1/2} .        (17.11)
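As a plain illustration of the Frobenius norm (a hypothetical helper, not the actual ELL_Matrix routine; in parallel, the local sums of squares would be combined with a global reduction before the square root):

```python
import math

def frobenius_norm(A):
    """Square root of the sum of squared magnitudes of all entries of A.

    A is a dense list-of-rows matrix; illustrative only -- the
    ELL_Matrix class uses a sparse ELL storage format in parallel.
    """
    return math.sqrt(sum(abs(a) ** 2 for row in A for a in row))

A = [[3.0, 0.0],
     [0.0, 4.0]]
print(frobenius_norm(A))   # 5.0  (sqrt of 9 + 16)
```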
The general class of matrix norms known as the p-norms are defined in
terms of vector norms by:

    \| A \|_{p} = \sup_{x \neq 0} \frac{ \| A x \|_{p} }{ \| x \|_{p} } .        (17.12)

In particular, the p = 1 and p = \infty norms are the most easily calculated:
    \| A \|_{1} = \max_{1 \leq j \leq N} \sum_{i=1}^{M} | a_{i,j} | \quad \text{(maximum absolute column sum)},        (17.13)

    \| A \|_{\infty} = \max_{1 \leq i \leq M} \sum_{j=1}^{N} | a_{i,j} | \quad \text{(maximum absolute row sum)}.        (17.14)
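Equations (17.13) and (17.14) can be sketched directly as column-sum and row-sum maxima (hypothetical helpers over a dense list-of-rows matrix, not the parallel ELL_Matrix routines):

```python
def one_norm(A):
    """Maximum absolute column sum, Eq. (17.13)."""
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def inf_norm(A):
    """Maximum absolute row sum, Eq. (17.14)."""
    return max(sum(abs(a) for a in row) for row in A)

A = [[1.0, -2.0],
     [3.0,  4.0]]
print(one_norm(A))   # 6.0  (column sums are 4 and 6)
print(inf_norm(A))   # 7.0  (row sums are 3 and 7)
```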
In general, p-norms are difficult to calculate. The 2-norm can be
shown to equal the square root of the largest eigenvalue of A^{H}A,
which is the largest singular value \sigma_{\max} of A:

    \| A \|_{2} = \sqrt{ \lambda_{\max}\left( A^{H} A \right) }        (17.15)

                = \sigma_{\max}(A) ,        (17.16)
where A^{H} is the Hermitian (or complex conjugate transpose, or
adjoint) of A. Calculating the 2-norm of a matrix requires an iterative
procedure, and is not currently done in CÆSAR (only the Frobenius,
p = 1, and p = \infty norms are calculated). However, CÆSAR calculates
an estimate of the 2-norm as a range (or the midpoint of the range)
using the following relationships:
    \frac{1}{\sqrt{N}} \, \| A \|_{F}      \leq \| A \|_{2} \leq \| A \|_{F} ,
    \max_{i,j} | a_{i,j} |                 \leq \| A \|_{2} \leq \sqrt{MN} \, \max_{i,j} | a_{i,j} | ,
    \frac{1}{\sqrt{N}} \, \| A \|_{\infty} \leq \| A \|_{2} \leq \sqrt{M} \, \| A \|_{\infty} ,
    \frac{1}{\sqrt{M}} \, \| A \|_{1}      \leq \| A \|_{2} \leq \sqrt{N} \, \| A \|_{1} ,
                                                \| A \|_{2} \leq \sqrt{ \| A \|_{1} \, \| A \|_{\infty} } .        (17.17)
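The range estimate could be assembled from the bounds in Eq. (17.17) as follows (a sketch over a dense real matrix; the function name and interface are assumptions, not the actual CÆSAR routine):

```python
import math

def two_norm_range(A):
    """Bracket ||A||_2 using the bounds in Eq. (17.17).

    A is a dense list-of-rows matrix with real entries (assumed).
    Returns (low, high) with low <= ||A||_2 <= high.
    """
    M, N = len(A), len(A[0])
    f    = math.sqrt(sum(a * a for row in A for a in row))       # ||A||_F
    mx   = max(abs(a) for row in A for a in row)                 # max |a_ij|
    n1   = max(sum(abs(row[j]) for row in A) for j in range(N))  # ||A||_1
    ninf = max(sum(abs(a) for a in row) for row in A)            # ||A||_inf
    # Tightest lower bound is the max of the lower bounds; tightest
    # upper bound is the min of the upper bounds.
    low  = max(f / math.sqrt(N), mx, ninf / math.sqrt(N), n1 / math.sqrt(M))
    high = min(f, math.sqrt(M * N) * mx, math.sqrt(M) * ninf,
               math.sqrt(N) * n1, math.sqrt(n1 * ninf))
    return low, high

A = [[3.0, 0.0],
     [0.0, 4.0]]
print(two_norm_range(A))   # brackets ||A||_2 = 4.0
```

The midpoint (low + high) / 2 then gives the single-number estimate mentioned above.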
For a similar but more complete discussion of matrix operations, see
Golub & Van Loan (1989).
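The text notes that calculating the 2-norm exactly requires an iterative procedure. One such procedure (not implemented in CÆSAR; shown here only as an illustration, assuming a dense real matrix) is power iteration on A^{T}A, which converges to \sigma_{\max}(A) = \| A \|_{2}:

```python
import math

def two_norm_power(A, iters=100):
    """Estimate ||A||_2 = sqrt(lambda_max(A^T A)) by power iteration.

    A is a dense list-of-rows matrix with real entries (assumed).
    Repeatedly applies A^T A to a vector; the normalized iterate
    converges to the dominant right singular vector of A.
    """
    n = len(A[0])
    x = [1.0] * n                                            # starting vector
    for _ in range(iters):
        # z = A^T (A x), i.e. one application of A^T A
        y = [sum(a * xi for a, xi in zip(row, x)) for row in A]
        z = [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(n)]
        nrm = math.sqrt(sum(zi * zi for zi in z))
        x = [zi / nrm for zi in z]                           # renormalize
    # With ||x|| = 1, ||A x|| approximates the largest singular value.
    y = [sum(a * xi for a, xi in zip(row, x)) for row in A]
    return math.sqrt(sum(yi * yi for yi in y))

A = [[3.0, 0.0],
     [0.0, 4.0]]
print(two_norm_power(A))   # approximately 4.0
```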
Michael L. Hall