Chapter 1 Algebraic and Analytic Methods

§1.2 Elementary Algebra

Contents
  1. §1.2(i) Binomial Coefficients
  2. §1.2(ii) Finite Series
  3. §1.2(iii) Partial Fractions
  4. §1.2(iv) Means
  5. §1.2(v) Matrices, Vectors, Scalar Products, and Norms
  6. §1.2(vi) Square Matrices

§1.2(i) Binomial Coefficients

In (1.2.1) and (1.2.3) $n$ and $k$ are nonnegative integers and $k \le n$. In (1.2.2), (1.2.4), and (1.2.5) $n$ is a positive integer. See also §26.3(i).

1.2.1
$\dbinom{n}{k} = \dfrac{n!}{(n-k)!\,k!} = \dbinom{n}{n-k}$.

For complex $z$ the binomial coefficient $\binom{z}{k}$ is defined via (1.2.6).

Binomial Theorem

1.2.2
$(a+b)^n = a^n + \dbinom{n}{1}a^{n-1}b + \dbinom{n}{2}a^{n-2}b^2 + \dots + b^n$.

1.2.3
$\dbinom{n}{0} + \dbinom{n}{1} + \dots + \dbinom{n}{n} = 2^n$.

1.2.4
$\dbinom{n}{0} - \dbinom{n}{1} + \dots + (-1)^n\dbinom{n}{n} = 0$.

1.2.5
$\dbinom{n}{0} + \dbinom{n}{2} + \dbinom{n}{4} + \dots + \dbinom{n}{\ell} = 2^{n-1}$,

where $\ell$ is $n$ or $n-1$ according as $n$ is even or odd.

In (1.2.6)–(1.2.9) $k$ and $m$ are nonnegative integers and $z$ is complex.

1.2.6
$\dbinom{z}{k} = \dfrac{z(z-1)\cdots(z-k+1)}{k!} = \dfrac{(-1)^k\,(-z)_k}{k!}$.

1.2.7
$\dbinom{z+1}{k} = \dbinom{z}{k} + \dbinom{z}{k-1}$.

1.2.8
$\displaystyle\sum_{k=0}^{m} \binom{z+k}{k} = \binom{z+m+1}{m}$.

1.2.9
$\dbinom{z}{k} = (-1)^k \dbinom{k-z-1}{k}$.

See also §26.3.
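
As an illustrative cross-check (not part of the DLMF), the following Python sketch evaluates the generalized binomial coefficient of (1.2.6) as a falling-factorial product and verifies the recurrence (1.2.7) for a complex argument; the helper name binom is hypothetical.

```python
import math

def binom(z, k):
    """Binomial coefficient for complex z and nonnegative integer k, per (1.2.6)."""
    result = 1.0
    for j in range(k):                 # z (z-1) ... (z-k+1)
        result *= z - j
    return result / math.factorial(k)  # divide by k!

z, k = 2.5 + 1.0j, 4
# Recurrence (1.2.7): binom(z+1, k) = binom(z, k) + binom(z, k-1)
assert abs(binom(z + 1, k) - (binom(z, k) + binom(z, k - 1))) < 1e-9
```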

§1.2(ii) Finite Series

Arithmetic Progression

1.2.10
$a + (a+d) + (a+2d) + \dots + \bigl(a + (n-1)d\bigr) = na + \tfrac{1}{2}n(n-1)d = \tfrac{1}{2}n(a + \ell)$,

where $\ell$ = last term of the series = $a + (n-1)d$.

Geometric Progression

1.2.11
$a + ax + ax^2 + \dots + ax^{n-1} = \dfrac{a(1 - x^n)}{1 - x}$,  $x \neq 1$.
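
Both closed forms can be spot-checked numerically; this short Python fragment (illustrative only, with arbitrarily chosen values) sums each progression directly and compares with (1.2.10) and (1.2.11).

```python
a, d, x, n = 3.0, 0.5, 1.1, 10

arithmetic = sum(a + k * d for k in range(n))          # a + (a+d) + ... + (a+(n-1)d)
assert abs(arithmetic - (n * a + 0.5 * n * (n - 1) * d)) < 1e-9   # (1.2.10)

geometric = sum(a * x**k for k in range(n))            # a + ax + ... + ax^(n-1)
assert abs(geometric - a * (1 - x**n) / (1 - x)) < 1e-9           # (1.2.11)
```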

§1.2(iii) Partial Fractions

Let $\alpha_1, \alpha_2, \dots, \alpha_n$ be distinct constants, and let $f(x)$ be a polynomial of degree less than $n$. Then

1.2.12
$\dfrac{f(x)}{(x-\alpha_1)(x-\alpha_2)\cdots(x-\alpha_n)} = \dfrac{A_1}{x-\alpha_1} + \dfrac{A_2}{x-\alpha_2} + \dots + \dfrac{A_n}{x-\alpha_n}$,

where

1.2.13
$A_j = \dfrac{f(\alpha_j)}{\prod_{k \neq j} (\alpha_j - \alpha_k)}$.

Also,

1.2.14
$\dfrac{f(x)}{(x-\alpha_1)^n} = \dfrac{B_1}{x-\alpha_1} + \dfrac{B_2}{(x-\alpha_1)^2} + \dots + \dfrac{B_n}{(x-\alpha_1)^n}$,

where

1.2.15
$B_j = \dfrac{f^{(n-j)}(\alpha_1)}{(n-j)!}$,

and $f^{(k)}$ is the $k$-th derivative of $f$ (§1.4(iii)).

If $m_1, m_2, \dots, m_n$ are positive integers and $\deg f < m_1 + m_2 + \dots + m_n$, then there exist polynomials $f_j(x)$, $\deg f_j < m_j$, such that

1.2.16
$\dfrac{f(x)}{(x-\alpha_1)^{m_1}(x-\alpha_2)^{m_2}\cdots(x-\alpha_n)^{m_n}} = \dfrac{f_1(x)}{(x-\alpha_1)^{m_1}} + \dfrac{f_2(x)}{(x-\alpha_2)^{m_2}} + \dots + \dfrac{f_n(x)}{(x-\alpha_n)^{m_n}}$.

To find the polynomials $f_j(x)$, $j = 1, 2, \dots, n$, multiply both sides by the denominator of the left-hand side and equate coefficients. See Chrystal (1959a, pp. 151–159).
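
For the simple-pole case, (1.2.13) can be applied directly. The following Python sketch (an illustration under assumed data, not a general routine) computes the coefficients $A_j$ for $f(x) = x^2 - 3$ with poles $1, 2, 4$, then spot-checks (1.2.12) at a non-pole point.

```python
import numpy as np

alpha = np.array([1.0, 2.0, 4.0])   # distinct poles alpha_j
f = np.poly1d([1.0, 0.0, -3.0])     # f(x) = x^2 - 3, degree < n = 3

# (1.2.13): A_j = f(alpha_j) / prod_{k != j} (alpha_j - alpha_k)
A = [f(aj) / np.prod([aj - ak for ak in alpha if ak != aj]) for aj in alpha]

x = 0.5                                          # any point that is not a pole
lhs = f(x) / np.prod(x - alpha)                  # left-hand side of (1.2.12)
rhs = sum(Aj / (x - aj) for Aj, aj in zip(A, alpha))
assert abs(lhs - rhs) < 1e-12
```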

§1.2(iv) Means

The arithmetic mean $A$ of $n$ numbers $a_1, a_2, \dots, a_n$ is

1.2.17
$A = \dfrac{a_1 + a_2 + \dots + a_n}{n}$.

The geometric mean $G$ and harmonic mean $H$ of $n$ positive numbers $a_1, a_2, \dots, a_n$ are given by

1.2.18
$G = (a_1 a_2 \cdots a_n)^{1/n}$,

1.2.19
$\dfrac{1}{H} = \dfrac{1}{n}\left(\dfrac{1}{a_1} + \dfrac{1}{a_2} + \dots + \dfrac{1}{a_n}\right)$.

If $r$ is a nonzero real number, then the weighted mean $M(r)$ of $n$ nonnegative numbers $a_1, a_2, \dots, a_n$, and $n$ positive numbers $p_1, p_2, \dots, p_n$ with

1.2.20
$p_1 + p_2 + \dots + p_n = 1$,

is defined by

1.2.21
$M(r) = (p_1 a_1^r + p_2 a_2^r + \dots + p_n a_n^r)^{1/r}$,

with the exception

1.2.22
$M(r) = 0$,  $r < 0$ and $a_j = 0$ for some $j$.

1.2.23
$\lim_{r \to \infty} M(r) = \max(a_1, a_2, \dots, a_n)$,

1.2.24
$\lim_{r \to -\infty} M(r) = \min(a_1, a_2, \dots, a_n)$.

For $p_j = 1/n$, $j = 1, 2, \dots, n$,

1.2.25
$M(1) = A$,  $M(-1) = H$,

and

1.2.26
$\lim_{r \to 0} M(r) = G$.

The last two equations require $a_j > 0$ for all $j$.
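
The special cases (1.2.25)–(1.2.26) are easy to confirm numerically. In the Python sketch below (illustrative; the helper name M is hypothetical), equal weights $p_j = 1/n$ are used and the limit $r \to 0$ is approximated by a small $r$.

```python
import numpy as np

a = np.array([1.0, 2.0, 8.0])        # positive numbers a_j
p = np.full(a.size, 1 / a.size)      # equal weights, summing to 1 per (1.2.20)

def M(r):
    """Weighted mean (1.2.21) for nonzero real r."""
    return float(np.sum(p * a**r) ** (1 / r))

A = a.mean()                           # arithmetic mean (1.2.17)
G = float(np.prod(a) ** (1 / a.size))  # geometric mean (1.2.18)
H = 1 / np.mean(1 / a)                 # harmonic mean (1.2.19)

assert abs(M(1) - A) < 1e-12 and abs(M(-1) - H) < 1e-12   # (1.2.25)
assert abs(M(1e-6) - G) < 1e-4                            # (1.2.26), r -> 0
```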

§1.2(v) Matrices, Vectors, Scalar Products, and Norms

General Matrices

The full index form of an $m \times n$ matrix $\mathbf{A}$ is

1.2.27
$\mathbf{A} = [a_{ij}] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$,

with matrix elements $a_{ij}$, where $i$, $j$ are the row and column indices, respectively. A matrix is zero if all its elements are zero, denoted $\mathbf{A} = \mathbf{0}$. A matrix is real if all its elements are real.

The transpose of $\mathbf{A} = [a_{ij}]$ is the $n \times m$ matrix

1.2.28
$\mathbf{A}^{\mathrm{T}} = [a_{ji}]$,

the complex conjugate is

1.2.29
$\overline{\mathbf{A}} = [\overline{a_{ij}}]$,

the Hermitian conjugate is

1.2.30
$\mathbf{A}^{\mathrm{H}} = [\overline{a_{ji}}]$.

Multiplication by a scalar $\lambda$ is given by

1.2.31
$\lambda \mathbf{A} = [\lambda a_{ij}]$.

For matrices $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{C}$ of the same dimensions,

1.2.32
$\mathbf{A} + \mathbf{B} = [a_{ij} + b_{ij}]$,

1.2.33
$(\mathbf{A} + \mathbf{B}) + \mathbf{C} = \mathbf{A} + (\mathbf{B} + \mathbf{C})$.

Multiplication of Matrices

Multiplication of an $m \times n$ matrix $\mathbf{A}$ and a $p \times q$ matrix $\mathbf{B}$, giving the $m \times q$ matrix $\mathbf{C} = \mathbf{A}\mathbf{B}$, is defined iff $n = p$. If defined, $\mathbf{C} = [c_{ij}]$ with

1.2.34
$c_{ij} = \displaystyle\sum_{k=1}^{n} a_{ik} b_{kj}$.

This is the row times column rule.

Assuming the indicated multiplications are defined: matrix multiplication is associative,

1.2.35
$\mathbf{A}(\mathbf{B}\mathbf{C}) = (\mathbf{A}\mathbf{B})\mathbf{C}$;

and distributive if $\mathbf{B}$ and $\mathbf{C}$ have the same dimensions,

1.2.36
$\mathbf{A}(\mathbf{B} + \mathbf{C}) = \mathbf{A}\mathbf{B} + \mathbf{A}\mathbf{C}$.

The transpose of the product is

1.2.37
$(\mathbf{A}\mathbf{B})^{\mathrm{T}} = \mathbf{B}^{\mathrm{T}} \mathbf{A}^{\mathrm{T}}$.

All of the above are defined for $m = n$, that is, for square matrices of order $n$. Note that matrix multiplication is not necessarily commutative; see §1.2(vi) for special properties of square matrices.
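
A brief NumPy illustration (not part of the DLMF) of the row-times-column rule (1.2.34), the transpose identity (1.2.37), and non-commutativity:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))            # m x n
B = rng.standard_normal((3, 4))            # n x q, so AB is defined

C = A @ B
c = sum(A[0, k] * B[k, 1] for k in range(3))   # (1.2.34) for one entry
assert abs(C[0, 1] - c) < 1e-12

assert np.allclose((A @ B).T, B.T @ A.T)       # (1.2.37)

S, T = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
assert not np.allclose(S @ T, T @ S)           # generally ST != TS
```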

Row and Column Vectors

A column vector of length $n$ is an $n \times 1$ matrix

1.2.38
$\mathbf{a} = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}$,

and the corresponding transposed row vector of length $n$ is

1.2.39
$\mathbf{a}^{\mathrm{T}} = [a_1 \; a_2 \; \dots \; a_n]$.

The column vector is often written as $[a_1 \; a_2 \; \dots \; a_n]^{\mathrm{T}}$ to avoid inconvenient typography. The zero vector $\mathbf{0}$ has $a_j = 0$ for $j = 1, 2, \dots, n$.

Column vectors $\mathbf{u}$ and $\mathbf{v}$ of the same length $n$ have a scalar product

1.2.40
$\langle \mathbf{u}, \mathbf{v} \rangle = \displaystyle\sum_{j=1}^{n} u_j \overline{v_j} = \mathbf{v}^{\mathrm{H}} \mathbf{u}$.

The dot product notation $\mathbf{u} \cdot \mathbf{v}$ is reserved for the physical three-dimensional vectors of (1.6.2).

The scalar product has properties

1.2.41
$\langle \mathbf{u}, \mathbf{v} \rangle = \overline{\langle \mathbf{v}, \mathbf{u} \rangle}$;

for $\alpha, \beta \in \mathbb{C}$,

1.2.42
$\langle \alpha\mathbf{u} + \beta\mathbf{v}, \mathbf{w} \rangle = \alpha \langle \mathbf{u}, \mathbf{w} \rangle + \beta \langle \mathbf{v}, \mathbf{w} \rangle$;

and

1.2.43
$\langle \mathbf{u}, \mathbf{u} \rangle \ge 0$,

with $\langle \mathbf{u}, \mathbf{u} \rangle = 0$ if and only if $\mathbf{u} = \mathbf{0}$.

If $\mathbf{u}$, $\mathbf{v}$, and $\mathbf{w}$ are real, the complex conjugate bars can be omitted in (1.2.40)–(1.2.42).

Two vectors $\mathbf{u}$ and $\mathbf{v}$ are orthogonal if

1.2.44
$\langle \mathbf{u}, \mathbf{v} \rangle = 0$.
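
For complex vectors, note that NumPy's vdot conjugates its first argument, so the scalar product of (1.2.40) is vdot(v, u); the sketch below (illustrative values) also checks the conjugate symmetry (1.2.41) and the positivity property (1.2.43).

```python
import numpy as np

u = np.array([1 + 1j, 2.0, -1j])
v = np.array([0.5, 1j, 3.0])

sp = np.vdot(v, u)                              # <u, v> = sum u_j conj(v_j) = v^H u
assert np.isclose(sp, np.sum(u * np.conj(v)))   # (1.2.40)
assert np.isclose(np.vdot(u, v), np.conj(sp))   # (1.2.41)
assert np.vdot(u, u).real > 0                   # (1.2.43), since u != 0
```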

Vector Norms

The $\ell^p$ norm of a (real or complex) vector $\mathbf{a} = [a_1 \; a_2 \; \dots \; a_n]^{\mathrm{T}}$ is

1.2.45
$\|\mathbf{a}\|_p = \left( \displaystyle\sum_{j=1}^{n} |a_j|^p \right)^{1/p}$,  $1 \le p < \infty$.

Special cases are the Euclidean length or $\ell^2$ norm

1.2.46
$\|\mathbf{a}\|_2 = \sqrt{\displaystyle\sum_{j=1}^{n} |a_j|^2} = \sqrt{\langle \mathbf{a}, \mathbf{a} \rangle}$,

the $\ell^1$ norm

1.2.47
$\|\mathbf{a}\|_1 = \displaystyle\sum_{j=1}^{n} |a_j|$,

and, as $p \to \infty$, the $\ell^\infty$ norm

1.2.48
$\|\mathbf{a}\|_\infty = \max(|a_1|, |a_2|, \dots, |a_n|)$.

The $\ell^2$ norm is implied unless otherwise indicated. A vector of norm unity is normalized, and every non-zero vector $\mathbf{a}$ can be normalized via $\mathbf{a}/\|\mathbf{a}\|$.
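
The three norms and the normalization step map directly onto numpy.linalg.norm, as this illustrative fragment shows:

```python
import numpy as np

a = np.array([3.0, -4.0, 12.0])
assert np.isclose(np.linalg.norm(a, 1), np.abs(a).sum())         # (1.2.47)
assert np.isclose(np.linalg.norm(a, 2), np.sqrt(np.sum(a * a)))  # (1.2.46)
assert np.isclose(np.linalg.norm(a, np.inf), np.abs(a).max())    # (1.2.48)

u = a / np.linalg.norm(a)                 # normalize a non-zero vector
assert np.isclose(np.linalg.norm(u), 1)   # unit l^2 norm
```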

Inequalities

If

1.2.49
$\dfrac{1}{p} + \dfrac{1}{q} = 1$,  $p, q > 1$,

we have Hölder's Inequality

1.2.50
$|\langle \mathbf{u}, \mathbf{v} \rangle| \le \|\mathbf{u}\|_p \, \|\mathbf{v}\|_q$,

which for $p = q = 2$ is the Cauchy–Schwarz inequality

1.2.51
$|\langle \mathbf{u}, \mathbf{v} \rangle| \le \|\mathbf{u}\|_2 \, \|\mathbf{v}\|_2$,

the equality holding iff $\mathbf{u}$ is a scalar (real or complex) multiple of $\mathbf{v}$. The triangle inequality is

1.2.52
$\|\mathbf{u} + \mathbf{v}\| \le \|\mathbf{u}\| + \|\mathbf{v}\|$.

For similar and further inequalities see §1.7(i).
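
A quick numerical illustration (random data, not a proof) of Hölder's inequality with the conjugate pair $p = 3$, $q = 3/2$, the Cauchy–Schwarz case, and the triangle inequality:

```python
import numpy as np

rng = np.random.default_rng(1)
u, v = rng.standard_normal(5), rng.standard_normal(5)
p, q = 3.0, 1.5                       # 1/p + 1/q = 1, per (1.2.49)

lhs = abs(np.dot(u, v))               # |<u, v>| for real vectors
assert lhs <= np.linalg.norm(u, p) * np.linalg.norm(v, q) + 1e-12   # (1.2.50)
assert lhs <= np.linalg.norm(u) * np.linalg.norm(v) + 1e-12         # (1.2.51)
assert np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v) + 1e-12  # (1.2.52)
```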

§1.2(vi) Square Matrices

Square matrices (said to be of order $n$) dominate the use of matrices in the DLMF, and they have many special properties. Unless otherwise indicated, matrices are assumed square, of order $n$; and, when vectors are combined with them, these are of length $n$.

Special Forms of Square Matrices

The identity matrix, $\mathbf{I}$, is defined as

1.2.53
$\mathbf{I} = [\delta_{ij}]$,

where $\delta_{ij}$ is the Kronecker delta. A matrix $\mathbf{A} = [a_{ij}]$ is: a diagonal matrix if

1.2.54
$a_{ij} = 0$ for $i \neq j$;

a real symmetric matrix if

1.2.55
$a_{ij} = a_{ji}$, with all $a_{ij}$ real;

an Hermitian matrix if

1.2.56
$a_{ij} = \overline{a_{ji}}$;

a tridiagonal matrix if

1.2.57
$a_{ij} = 0$ for $|i - j| > 1$.

$\mathbf{A}$ is an upper or lower triangular matrix if all $a_{ij}$ vanish for $i > j$ or $i < j$, respectively.

Equation (3.2.7) displays a tridiagonal matrix in index form; (3.2.4) does the same for a lower triangular matrix.
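
These defining conditions translate into one-line predicates; the helper names in this Python sketch are hypothetical and the tests are illustrative only.

```python
import numpy as np

def is_diagonal(A):    return np.allclose(A, np.diag(np.diag(A)))      # (1.2.54)
def is_symmetric(A):   return np.isrealobj(A) and np.allclose(A, A.T)  # (1.2.55)
def is_hermitian(A):   return np.allclose(A, A.conj().T)               # (1.2.56)
def is_tridiagonal(A):                                                 # (1.2.57)
    i, j = np.indices(A.shape)
    return np.allclose(A[np.abs(i - j) > 1], 0)

H = np.array([[2.0, 1 - 1j], [1 + 1j, 3.0]])
assert is_hermitian(H) and not is_symmetric(H)
assert is_tridiagonal(H)   # every 2 x 2 matrix is trivially tridiagonal
```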

Special Properties and Definitions Relating to Square Matrices

The Determinant

The matrix $\mathbf{A}$ has a determinant, $\det(\mathbf{A})$, explored further in §1.3, denoted, in full index form, as

1.2.58
$\det(\mathbf{A}) = \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}$,

where $\det(\mathbf{A})$ is defined by the Leibniz formula

1.2.59
$\det(\mathbf{A}) = \displaystyle\sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{j=1}^{n} a_{j,\sigma(j)}$.

$S_n$ is the set of all permutations of the set $\{1, 2, \dots, n\}$, and $\operatorname{sgn}(\sigma)$ is the sign of the permutation $\sigma$. See §26.13 for the terminology used herein.
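
The Leibniz formula can be coded directly, though at $O(n \cdot n!)$ cost it is a teaching device rather than a practical algorithm (see §3.2 for practical methods). This sketch (the helper name det_leibniz is hypothetical) checks it against numpy.linalg.det.

```python
import numpy as np
from itertools import permutations

def det_leibniz(A):
    """Determinant via (1.2.59); exponential cost, small n only."""
    n = A.shape[0]
    total = 0.0
    for sigma in permutations(range(n)):
        # sign of sigma from its inversion count
        inv = sum(sigma[i] > sigma[j] for i in range(n) for j in range(i + 1, n))
        term = (-1.0) ** inv
        for j in range(n):
            term *= A[j, sigma[j]]   # a_{j, sigma(j)}
        total += term
    return total

A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 4.0]])
assert abs(det_leibniz(A) - np.linalg.det(A)) < 1e-10
```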

The Inverse

If $\det(\mathbf{A}) \neq 0$, $\mathbf{A}$ has a unique inverse, $\mathbf{A}^{-1}$, such that

1.2.60
$\mathbf{A}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}$.

A square matrix is singular if $\det(\mathbf{A}) = 0$; otherwise it is non-singular. If $\det(\mathbf{A}) = 0$ then $\mathbf{A}\mathbf{b} = \mathbf{A}\mathbf{c}$ does not imply that $\mathbf{b} = \mathbf{c}$; if $\det(\mathbf{A}) \neq 0$, then $\mathbf{A}\mathbf{b} = \mathbf{A}\mathbf{c}$ implies $\mathbf{b} = \mathbf{c}$, as both sides may be multiplied by $\mathbf{A}^{-1}$.

Linear Equations in Unknowns

Given a square matrix $\mathbf{A}$ and a vector $\mathbf{b}$: if $\det(\mathbf{A}) \neq 0$, the system of $n$ linear equations in $n$ unknowns,

1.2.61
$\mathbf{A}\mathbf{x} = \mathbf{b}$,

has a unique solution, $\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}$. If $\det(\mathbf{A}) = 0$ then, depending on $\mathbf{b}$, there is either no solution or there are infinitely many solutions, each being the sum of a particular solution of (1.2.61) and any solution of $\mathbf{A}\mathbf{x} = \mathbf{0}$. Numerical methods and issues for solution of (1.2.61) appear in §§3.2(i) to 3.2(iii).
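
In floating-point practice the solution is obtained by factorization rather than by forming $\mathbf{A}^{-1}$ explicitly; numpy.linalg.solve does exactly that. A minimal illustration:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

assert np.linalg.det(A) != 0      # det(A) != 0, so the solution is unique
x = np.linalg.solve(A, b)         # solves (1.2.61) via LU factorization
assert np.allclose(A @ x, b)
```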

The Trace

The trace of $\mathbf{A}$ is

1.2.62
$\operatorname{tr}(\mathbf{A}) = \displaystyle\sum_{j=1}^{n} a_{jj}$.

Further,

1.2.63
$\operatorname{tr}(\mathbf{A} + \mathbf{B}) = \operatorname{tr}(\mathbf{A}) + \operatorname{tr}(\mathbf{B})$,

1.2.64
$\operatorname{tr}(\mathbf{A}\mathbf{B}) = \operatorname{tr}(\mathbf{B}\mathbf{A})$,

and, for any non-singular $\mathbf{S}$,

1.2.65
$\operatorname{tr}(\mathbf{S}^{-1}\mathbf{A}\mathbf{S}) = \operatorname{tr}(\mathbf{A})$.

The Commutator

If $\mathbf{A}\mathbf{B} = \mathbf{B}\mathbf{A}$, the matrices $\mathbf{A}$ and $\mathbf{B}$ are said to commute. The difference between $\mathbf{A}\mathbf{B}$ and $\mathbf{B}\mathbf{A}$ is the commutator, denoted as

1.2.66
$[\mathbf{A}, \mathbf{B}] = \mathbf{A}\mathbf{B} - \mathbf{B}\mathbf{A}$.
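
Spot checks of (1.2.63)–(1.2.66) (all values random and illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
S = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # invertible with high probability

assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))       # (1.2.63)
assert np.isclose(np.trace(A @ B), np.trace(B @ A))                 # (1.2.64)
assert np.isclose(np.trace(np.linalg.inv(S) @ A @ S), np.trace(A))  # (1.2.65)

comm = A @ B - B @ A              # the commutator (1.2.66)
assert not np.allclose(comm, 0)   # random A, B almost never commute
```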

Norms of Square Matrices

Let $\|\cdot\|_p$ denote the $\ell^p$ vector norm, and $\mathbb{C}^n$ the space of all $n$-dimensional (column) vectors. We take $\mathbf{x} \in \mathbb{C}^n$, but we can also restrict ourselves to vectors and matrices with only real elements. The $\ell^p$ norm of an order $n$ square matrix, $\mathbf{A}$, is

1.2.67
$\|\mathbf{A}\|_p = \displaystyle\max_{\mathbf{x} \neq \mathbf{0}} \dfrac{\|\mathbf{A}\mathbf{x}\|_p}{\|\mathbf{x}\|_p}$.

Then

1.2.68
$\|\mathbf{A}\mathbf{x}\|_p \le \|\mathbf{A}\|_p \, \|\mathbf{x}\|_p$

and

1.2.69
$\|\mathbf{A}\mathbf{B}\|_p \le \|\mathbf{A}\|_p \, \|\mathbf{B}\|_p$.
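
For $p = 1, 2, \infty$ the induced norm (1.2.67) is what numpy.linalg.norm returns for matrices (maximum column sum, largest singular value, and maximum row sum, respectively), and (1.2.68) can be sampled directly:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

for p in (1, 2, np.inf):
    # (1.2.68): ||A x||_p <= ||A||_p ||x||_p
    assert np.linalg.norm(A @ x, p) <= np.linalg.norm(A, p) * np.linalg.norm(x, p) + 1e-12
```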

Eigenvectors and Eigenvalues of Square Matrices

A square matrix $\mathbf{A}$ has an eigenvalue $\lambda$ with corresponding non-zero eigenvector $\mathbf{a}$ if

1.2.70
$\mathbf{A}\mathbf{a} = \lambda\mathbf{a}$.

Here $\lambda$ and $\mathbf{a}$ may be complex even if $\mathbf{A}$ is real. Eigenvalues are the roots of the polynomial equation

1.2.71
$\det(\mathbf{A} - \lambda\mathbf{I}) = 0$,

and for the corresponding eigenvectors one has to solve the linear system

1.2.72
$(\mathbf{A} - \lambda\mathbf{I})\mathbf{a} = \mathbf{0}$.

Numerical methods and issues for solution of (1.2.72) appear in §§3.2(iv) to 3.2(vii).
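
In practice eigenpairs come from library routines such as numpy.linalg.eig; the sketch below verifies (1.2.70) and (1.2.71) for a small, arbitrarily chosen matrix.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
lam, V = np.linalg.eig(A)                  # columns of V are eigenvectors

for j in range(A.shape[0]):
    assert np.allclose(A @ V[:, j], lam[j] * V[:, j])        # (1.2.70)

for l in lam:
    assert abs(np.linalg.det(A - l * np.eye(2))) < 1e-10     # (1.2.71)
```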

Non-Defective Square Matrices

Nonzero vectors $\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_k$ are linearly independent if $c_1\mathbf{a}_1 + c_2\mathbf{a}_2 + \dots + c_k\mathbf{a}_k = \mathbf{0}$ implies that all coefficients $c_j$ are zero. A matrix $\mathbf{A}$ of order $n$ is non-defective if it has $n$ linearly independent (possibly complex) eigenvectors; otherwise $\mathbf{A}$ is called defective. Non-defective matrices are precisely the matrices which can be diagonalized via a similarity transformation of the form

1.2.73
$\mathbf{S}^{-1}\mathbf{A}\mathbf{S} = \boldsymbol{\Lambda} = \operatorname{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$.

The columns of the invertible matrix $\mathbf{S}$ are eigenvectors of $\mathbf{A}$, and $\boldsymbol{\Lambda}$ is a diagonal matrix with the eigenvalues $\lambda_j$ as diagonal elements. The diagonal elements are not necessarily distinct, and the number of identical (degenerate) diagonal elements is the multiplicity of that specific eigenvalue. The sum of all multiplicities is $n$.

Relation of Eigenvalues to the Determinant and Trace

For non-defective $\mathbf{A}$ we obtain from (1.2.73) and (1.3.7)

1.2.74
$\det(\mathbf{A}) = \lambda_1 \lambda_2 \cdots \lambda_n$.

Thus $\det(\mathbf{A})$ is the product of the (counted according to their multiplicities) eigenvalues of $\mathbf{A}$. Similarly, we obtain from (1.2.73) and (1.2.65)

1.2.75
$\operatorname{tr}(\mathbf{A}) = \lambda_1 + \lambda_2 + \dots + \lambda_n$.

Thus $\operatorname{tr}(\mathbf{A})$ is the sum of the (counted according to their multiplicities) eigenvalues of $\mathbf{A}$.
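
Both relations are easy to confirm numerically for a random (generically non-defective) matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5))
lam = np.linalg.eigvals(A)          # eigenvalues, with multiplicity

assert np.isclose(np.prod(lam).real, np.linalg.det(A))   # (1.2.74)
assert np.isclose(np.sum(lam).real, np.trace(A))         # (1.2.75)
```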

The Matrix Exponential and the Exponential of the Trace

The matrix exponential is defined via

1.2.76
$e^{\mathbf{A}} = \displaystyle\sum_{k=0}^{\infty} \dfrac{\mathbf{A}^k}{k!}$,

which converges, entry-wise or in norm, for all $\mathbf{A}$.

It follows from (1.2.73), (1.2.74), and (1.2.75) that, for a non-defective matrix $\mathbf{A}$,

1.2.77
$\det(e^{\mathbf{A}}) = e^{\operatorname{tr}(\mathbf{A})}$.

Formula (1.2.77) is more generally valid for all square matrices $\mathbf{A}$, not necessarily non-defective; see Hall (2015, Thm. 2.12).
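
A final illustrative check: truncating the series (1.2.76) reproduces scipy.linalg.expm, and (1.2.77) holds for the chosen matrix.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])    # exp(A) is rotation by 1 radian

E, term = np.eye(2), np.eye(2)
for k in range(1, 30):                      # partial sums of (1.2.76)
    term = term @ A / k
    E += term

assert np.allclose(E, expm(A))
assert np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A)))   # (1.2.77)
```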
