# Matrices

Contents

- 0.1 Matrix
- 0.2 Square matrix
- 0.3 Diagonal matrix
- 0.4 Row matrix
- 0.5 Column matrix
- 0.6 Matrices of the same kind
- 0.7 The transposed matrix of a matrix
- 0.8 0-matrix
- 0.9 An identity matrix I
- 0.10 A scalar matrix S
- 0.11 The opposite matrix of a matrix
- 0.12 A symmetric matrix
- 0.13 A skew-symmetric matrix
- 1 The sum of matrices of the same kind
- 2 Scalar multiplication
- 3 Sums in math
- 4 Multiplication of a row matrix by a column matrix
- 5 Multiplication of two matrices A.B
- 6 Properties of multiplication of matrices

### Matrix

A matrix is an ordered set of numbers listed in rectangular form.

Example. Let A denote the matrix

```
[2 5 7 8]
[5 6 8 9]
[3 9 0 1]
```

This matrix A has three rows and four columns. We say it is a 3 x 4 matrix.

We denote the element in the second row and fourth column by a_{2,4}.
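
To make the indexing concrete, here is a minimal Python sketch (not part of the original text); note that Python lists are 0-indexed, so a_{2,4} lives at A[1][3].

```python
# The 3 x 4 example matrix, stored as a list of rows.
A = [[2, 5, 7, 8],
     [5, 6, 8, 9],
     [3, 9, 0, 1]]

# Python indexing is 0-based, so the element a_{2,4}
# (second row, fourth column) is A[1][3].
print(A[1][3])  # 9
```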

### Square matrix

If a matrix A has n rows and n columns, we say it is a square matrix.

In a square matrix the elements a_{i,i}, with i = 1, 2, 3, ..., are called the diagonal elements.

Remark. There is no difference between a 1 x 1 matrix and an ordinary number.

### Diagonal matrix

A diagonal matrix is a square matrix in which all the non-diagonal elements are 0.

A diagonal matrix is completely determined by its diagonal elements.

Example.

```
[7 0 0]
[0 5 0]
[0 0 6]
```

This matrix is denoted by diag(7, 5, 6).

### Row matrix

A matrix with one row is called a row matrix.

```
[2 5 -1 5]
```

### Column matrix

A matrix with one column is called a column matrix.

```
[2]
[4]
[3]
[0]
```

### Matrices of the same kind

Matrices A and B are of the same kind if and only if A has as many rows as B and A has as many columns as B.

Example.

```
[7 1 2]     [4 0 3]
[0 5 6] and [1 1 4]
[3 4 6]     [8 6 2]
```

### The transposed matrix of a matrix

The n x m matrix B is the transposed matrix of the m x n matrix A if and only if

The ith row of A = the ith column of B, for i = 1, 2, ..., m.

So a_{i,j} = b_{j,i}

The transposed matrix of A is denoted T(A) or A^{T}.

Example.

```
    [7 1]          [7 0 3]
A = [0 5]    A^T = [1 5 4]
    [3 4]
```
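
As an illustration, a minimal Python sketch of transposition; the helper name transpose is ours, not from the text.

```python
def transpose(A):
    """Return A^T: the ith row of A becomes the ith column."""
    return [[A[i][j] for i in range(len(A))]
            for j in range(len(A[0]))]

A = [[7, 1],
     [0, 5],
     [3, 4]]
print(transpose(A))  # [[7, 0, 3], [1, 5, 4]]
```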

### 0-matrix

When all the elements of a matrix A are 0, we call A a 0-matrix.

We simply write 0 for a 0-matrix.

### An identity matrix I

An identity matrix I is a diagonal matrix with all the diagonal elements = 1.

```
[1]

[1 0]
[0 1]

[1 0 0]
[0 1 0]
[0 0 1]

...
```

### A scalar matrix S

A scalar matrix S is a diagonal matrix whose diagonal elements all have the same scalar value:

a_{i,i} = a_{1,1} for i = 1, 2, ..., n

```
[7 0 0]
[0 7 0]
[0 0 7]
```

### The opposite matrix of a matrix

If we change the sign of all the elements of a matrix A, we have the opposite matrix -A.

If A' is the opposite of A, then a'_{i,j} = -a_{i,j} for all i and j.

### A symmetric matrix

A square matrix is called symmetric if it is equal to its transpose.

Then a_{i,j} = a_{j,i} , for all i and j.

```
[7 1 5]
[1 3 0]
[5 0 7]
```

### A skew-symmetric matrix

A square matrix is called skew-symmetric if it is equal to the opposite of its transpose.

Then a_{i,j} = -a_{j,i}, for all i and j. In particular, all the diagonal elements are 0.

```
[ 0 1 -5]
[-1 0 0]
[ 5 0 0]
```
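
Both definitions translate directly into element-wise checks. Here is a minimal Python sketch, assuming square input matrices; the function names are ours, not from the text.

```python
def is_symmetric(A):
    """a_{i,j} = a_{j,i} for all i and j."""
    n = len(A)
    return all(A[i][j] == A[j][i] for i in range(n) for j in range(n))

def is_skew_symmetric(A):
    """a_{i,j} = -a_{j,i} for all i and j (this forces a zero diagonal)."""
    n = len(A)
    return all(A[i][j] == -A[j][i] for i in range(n) for j in range(n))

S = [[7, 1, 5], [1, 3, 0], [5, 0, 7]]    # the symmetric example above
K = [[0, 1, -5], [-1, 0, 0], [5, 0, 0]]  # the skew-symmetric example above
print(is_symmetric(S), is_skew_symmetric(K))  # True True
```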

## The sum of matrices of the same kind

### Sum of matrices

To add two matrices of the same kind, we simply add the corresponding elements.
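
As a sketch in Python (the helper name mat_add is ours), element-wise addition of two matrices of the same kind:

```python
def mat_add(A, B):
    """Add two matrices of the same kind element by element."""
    return [[a + b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

print(mat_add([[7, 1, 2], [0, 5, 6]],
              [[4, 0, 3], [1, 1, 4]]))
# [[11, 1, 5], [1, 6, 10]]
```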

### Sum properties

Consider the set S of all n x m matrices (n and m fixed) and A and B are in S.

From the properties of real numbers it’s immediate that

- A + B is in S
- the addition of matrices is associative in S
- A + 0 = A = 0 + A
- with each A corresponds an opposite matrix -A
- A + B = B + A

## Scalar multiplication

### Definition

To multiply a matrix by a real number, we multiply each element by this number.
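
A minimal Python sketch of this definition (the name scalar_mul is ours):

```python
def scalar_mul(r, A):
    """Multiply every element of A by the real number r."""
    return [[r * a for a in row] for row in A]

print(scalar_mul(3, [[7, 0], [0, 5]]))  # [[21, 0], [0, 15]]
```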

### Properties

Consider the set S of all n x m matrices (n and m fixed). A and B are in S; r and s are real numbers.

It is not difficult to see that:

```
r(A+B) = rA + rB
(r+s)A = rA + sA
(rs)A = r(sA)
(rA)^T = r.A^T
```

## Sums in math

Because the following makes intensive use of the properties of sums, readers who are not familiar with these properties should first read Sums in math.

Remark. In this html document, for convenience, we’ll write the word sum instead of the sigma sign.

## Multiplication of a row matrix by a column matrix

This multiplication is only possible if the row matrix and the column matrix have the same number of elements. The result is an ordinary number (a 1 x 1 matrix).

To multiply the row by the column, multiply the corresponding elements and then add the results.

Example.

```
          [1]
[2 1 3] . [2] = [19]
          [5]
```
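
In Python this is a plain dot product; a minimal sketch (the helper name is ours) that reproduces the example:

```python
def row_times_column(row, col):
    """Multiply corresponding elements, then sum the results."""
    return sum(r * c for r, c in zip(row, col))

print(row_times_column([2, 1, 3], [1, 2, 5]))  # 2 + 2 + 15 = 19
```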

## Multiplication of two matrices A.B

This product is defined only if A is a (l x m) matrix and B is a (m x n) matrix.

So the number of columns of A has to be equal to the number of rows of B.

The product C = A.B then is a (l x n) matrix.

The element in the ith row and jth column of the product is found by multiplying the ith row of A by the jth column of B:

```
c_{i,j} = sum_{k} (a_{i,k}.b_{k,j})
```

Examples.

```
[1 2][1 3]   [5 7]
[2 1][2 2] = [4 8]

[1 3][1 2]   [7 5]
[2 2][2 1] = [6 6]

[1 1][ 2  2]   [0 0]
[1 1][-2 -2] = [0 0]

[1 3 2] [ 3 -1  4]   [1 16  5]
[4 5 3] [-2  3  1] = [8 23 18]
[2 2 1] [ 2  4 -1]   [4  8  9]
```

From these examples we see that the product is not commutative and that there are zero divisors: matrices different from a zero matrix whose product is a zero matrix.
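
A minimal Python sketch of the product formula (the helper name mat_mul is ours); it reproduces the first two examples above and makes the non-commutativity visible:

```python
def mat_mul(A, B):
    """c_{i,j} = sum over k of a_{i,k} * b_{k,j}."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [2, 1]]
B = [[1, 3], [2, 2]]
print(mat_mul(A, B))  # [[5, 7], [4, 8]]
print(mat_mul(B, A))  # [[7, 5], [6, 6]], a different result
```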

Application

A matrix A is called idempotent if and only if A^{2} = A.

Given:

```
    [1 b c]
A = [0 0 2]
    [0 0 1]
```

Find the set of all 3 x 3 matrices of type A such that A is idempotent.

Solution:

We calculate A^{2}.

```
      [1 b 2c+2b]
A^2 = [0 0   2  ]
      [0 0   1  ]
```

A^{2} = A
<=> 2c + 2b = c
<=> c = -2b

All requested matrices are

```
[1 b -2b]
[0 0  2 ]   with b in R
[0 0  1 ]
```
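
As a quick numerical check, a Python sketch (repeating the mat_mul helper from the multiplication section) with the hypothetical choice b = 1, hence c = -2:

```python
def mat_mul(A, B):  # same sketch as in the multiplication section
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

b = 1                      # hypothetical choice; any real b works
A = [[1, b, -2 * b],
     [0, 0, 2],
     [0, 0, 1]]
print(mat_mul(A, A) == A)  # True: A is idempotent
```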

## Properties of multiplication of matrices

### Associativity

If the multiplication is defined, then A(B.C) = (A.B)C holds for all matrices A, B and C.

Proof:

We'll show that each element of A(B.C) is equal to the corresponding element of (A.B)C.

First we calculate the element of the ith row and jth column of A(B.C):

```
Let D denote B.C, then
  d_{k,j} = sum_{p} b_{k,p}.c_{p,j}                      (1)

Let E denote A.D, then
  e_{i,j} = sum_{k} a_{i,k}.d_{k,j}                      (2)

Substituting (1) in (2) gives
  e_{i,j} = sum_{k} a_{i,k}.(sum_{p} b_{k,p}.c_{p,j})
  <=> e_{i,j} = sum_{k,p} a_{i,k}.b_{k,p}.c_{p,j}

So the element of the ith row and jth column of A(B.C) is
  sum_{k,p} a_{i,k}.b_{k,p}.c_{p,j}                      (3)
```

Now we calculate the element of the ith row and jth column of (A.B)C:

```
Let D' denote A.B, then
  d'_{i,p} = sum_{k} a_{i,k}.b_{k,p}                     (4)

Let E' denote D'.C, then
  e'_{i,j} = sum_{p} d'_{i,p}.c_{p,j}                    (5)

Substituting (4) in (5) gives
  e'_{i,j} = sum_{p} (sum_{k} a_{i,k}.b_{k,p}).c_{p,j}
  <=> e'_{i,j} = sum_{k,p} a_{i,k}.b_{k,p}.c_{p,j}

So the element of the ith row and jth column of (A.B)C is
  sum_{k,p} a_{i,k}.b_{k,p}.c_{p,j}                      (6)
```

From (3) and (6) it follows that A(B.C) = (A.B)C.

### Distributivity

If the multiplication is defined, then A(B+C) = A.B + A.C and (A+B).C = A.C + B.C hold for all matrices A, B and C. This theorem can be proved in the same way as above.

### Theorem 1

For each A, there is always an identity matrix E and an identity matrix E’ so that A.E = A and E’.A = A.

If A is a square matrix then E = E’.

### Theorem 2

```
(A.B)^T = B^T.A^T
```

This theorem can be proved in the same way as above.

Example.

If we transpose the product

```
[ 2 4 ] [x]
[ 3 8 ] [y]
```

we get

```
[x y] [ 2 3 ]
      [ 4 8 ]
```
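
A small Python check of the theorem on the example above, with hypothetical values x = 1, y = 5 (the helper names are ours):

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col))
             for col in zip(*B)] for row in A]

A = [[2, 4], [3, 8]]
B = [[1], [5]]  # the column [x, y] with hypothetical x = 1, y = 5
print(transpose(mat_mul(A, B)) ==
      mat_mul(transpose(B), transpose(A)))  # True
```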

### Theorem 3

If the multiplication is defined then we have for any A

```
A.0 = 0 = 0.A
```

### Theorem 4

Let r and s be real numbers and A, B matrices. If the multiplication is defined, then (rA)(sB) = (rs)(A.B). This theorem can be proved in the same way as above.

### Theorem 5

```
If D = diag(a,b,c), then
  D.D   = diag(a^2, b^2, c^2)
  D.D.D = diag(a^3, b^3, c^3)
  ...
```

This property can be generalized for D = diag(a,b,c,d,e,…,l).
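
As a closing sketch in Python (helper names ours), a numerical check of the diagonal-power rule:

```python
def diag(*entries):
    """Build diag(a, b, c, ...) as a full square matrix."""
    n = len(entries)
    return [[entries[i] if i == j else 0 for j in range(n)]
            for i in range(n)]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col))
             for col in zip(*B)] for row in A]

D = diag(2, 3, 5)
print(mat_mul(D, D) == diag(4, 9, 25))                # D^2
print(mat_mul(mat_mul(D, D), D) == diag(8, 27, 125))  # D^3
```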