- Introduction to matrices
- Adding and subtracting matrices
- Multiplying matrices
- 2 × 2 Matrices and linear transformations
- Determinants of 2 × 2 matrices
- Inverses of 2 × 2 matrices
- Invariant points and lines in 2 dimensions
- 3 × 3 Matrices and linear transformations
- Determinants of 3 × 3 matrices
- Inverses of 3 × 3 matrices
- Matrices and simultaneous equations

# Part 1: Introduction to matrices

A **matrix** is an array of elements. The elements we will see in matrices will usually be numbers or algebraic expressions. An \(m \times n\) matrix has \(m \) rows and \(n \) columns. In some books, you will find matrices written in square brackets [also known as box brackets], but here we will use round brackets (also known as parentheses). Matrices are denoted by bold, capital letters e.g. **A**.

The **order** of a matrix tells you how many rows and columns it has. For example, \( \begin{pmatrix} 5 & 2 & 4 \\ 1 & 8 & 2 \\\end{pmatrix}\) is a \(2 \times 3\) matrix: it has 2 rows and 3 columns.

# Part 2: Adding and subtracting matrices

You can **only** add or subtract matrices **A** and **B** if they are the same order. We can work out **A** + **B** by simply adding the corresponding elements of the matrices. We can work out **A** − **B** by subtracting each element of **B** from the corresponding element of **A**.
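As a minimal sketch of this rule (representing matrices as Python lists of rows — a representation chosen purely for illustration, not part of the text):

```python
# Add or subtract two matrices of the same order by combining
# corresponding elements. Illustrative sketch, not a library implementation.

def mat_add(A, B):
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "orders must match"
    return [[a + b for a, b in zip(rowA, rowB)] for rowA, rowB in zip(A, B)]

def mat_sub(A, B):
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "orders must match"
    return [[a - b for a, b in zip(rowA, rowB)] for rowA, rowB in zip(A, B)]

A = [[5, 2], [1, 8]]
B = [[1, 1], [2, 3]]
print(mat_add(A, B))  # [[6, 3], [3, 11]]
print(mat_sub(A, B))  # [[4, 1], [-1, 5]]
```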

# Part 3: Multiplying matrices

### Multiplying a matrix by a scalar

To multiply a matrix by a scalar (a single number or algebraic expression), simply multiply each element in the matrix by the scalar. Note that this fits in with our general understanding of multiplication as repeated addition.
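A minimal sketch of scalar multiplication (matrices as Python lists of rows, a representation chosen for illustration):

```python
# Multiply every element of a matrix by a scalar k.
def scalar_mul(k, A):
    return [[k * a for a in row] for row in A]

M = [[5, 2, 4], [1, 8, 2]]
print(scalar_mul(3, M))  # [[15, 6, 12], [3, 24, 6]]
```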

### Multiplying a matrix by a matrix

**Before getting into the detail of multiplying a matrix by another matrix, we’ll take a look at a simple situation to help illustrate the principle behind matrix multiplication:**

A football team scores 3 points for winning a match, 1 point for drawing, and 0 points for losing. Suppose Alton play 11 games, winning 5, drawing 2, and losing 4. They would score \(5 \times 3 + 2 \times 1 + 4 \times 0 = 17\) points. We can represent this as a matrix multiplication as follows:

\( \begin{pmatrix} 5 & 2 & 4 \\ \end{pmatrix} \times \begin{pmatrix} 3\\ 1\\ 0 \end{pmatrix} = \begin{pmatrix} 17 \end{pmatrix} \)

Suppose Belton also play 11 games, but they win 1, draw 8, and lose 2. They would score \(1 \times 3 + 8 \times 1 + 2 \times 0 = 11\) points. We can represent both teams’ results and points scores as a matrix multiplication like this:

\( \begin{pmatrix} 5 & 2 & 4 \\ 1 & 8 & 2 \\\end{pmatrix} \times \begin{pmatrix} 3\\ 1\\ 0 \end{pmatrix} = \begin{pmatrix} 17 \\ 11 \\ \end{pmatrix} \)

Now suppose we wanted to see how the total points scored by each team would differ if 4 points were awarded for a win. We can use the following matrix multiplication:

\( \begin{pmatrix} 5 & 2 & 4 \\ 1 & 8 & 2 \\\end{pmatrix} \times \begin{pmatrix} 3 & 4 \\ 1 & 1\\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 17 & 22 \\ 11 & 12 \\ \end{pmatrix} \)
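The row-times-column rule behind these examples can be sketched in code (a minimal illustration, with matrices as Python lists of rows — a representation chosen for this sketch, not part of the original activity):

```python
# Multiply an m x n matrix by an n x p matrix: entry (i, j) of the product
# is the sum of products of row i of A with column j of B.
def mat_mul(A, B):
    n = len(B)
    assert len(A[0]) == n, "columns of A must equal rows of B"
    p = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]

results = [[5, 2, 4],   # Alton:  wins, draws, losses
           [1, 8, 2]]   # Belton: wins, draws, losses
points  = [[3, 4],      # points per win under the two schemes
           [1, 1],      # points per draw
           [0, 0]]      # points per loss
print(mat_mul(results, points))  # [[17, 22], [11, 12]]
```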

The activity below walks you through each of the above examples, step-by-step. It then allows you to generate random practice questions. First note the following important facts about matrix multiplication:

- **It is not always possible to multiply matrices together.** It is only possible to find **A** \( \times\) **B** if the number of columns in **A** is equal to the number of rows in **B**. In other words, it is only possible to multiply an \(m \times n\) matrix by an \(n \times p\) matrix (where \(m\) and \(p\) need not be equal). The result will be an \(m \times p\) matrix.
- **In general, matrix multiplication is not commutative.** That is, **AB** is not always equal to **BA**. Indeed, for the reason mentioned above, it may not even be possible to work out **BA** even though **AB** exists. For example, a \(1 \times 2\) matrix multiplied by a \(2 \times 3\) matrix will result in a \(1 \times 3\) matrix, but it is not even possible to multiply a \(2 \times 3\) matrix by a \(1 \times 2\) matrix (because \(3\) ≠ \(1\)).
- **Matrix multiplication is associative, however.** In other words, **A(BC)** = **(AB)C**.

### Matrices and index notation

We can use index notation with matrices to indicate repeated multiplication. As you might expect:

**A**^{2} = **A** \( \times \) **A**

**A**^{3} = **A** \( \times \) **A** \( \times \) **A**

### The identity matrix

**Square matrices with 1s on the main diagonal (top-left to bottom-right) and 0s everywhere else are known as identity matrices**, for example \(\begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix}\) and \(\begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}\).

It is often clear from context what size matrix we are dealing with, which is why you will often see references to *the* identity matrix. We denote the identity matrix using the letter **I**.

**Identity matrices are very special.** Given any square matrix **A**, you will find that:

**I** \( \times \) **A** = **A** and **A** \( \times \) **I** = **A**,

where **I** is the identity matrix that is the same order (i.e. size) as **A**.

You can think of **I** as the matrix analogue of the number 1. The number 1 is the *multiplicative identity*: when you multiply any number \(n\) by 1, your result is \(n\), i.e. it is unchanged. Similarly, when you multiply any *suitable* matrix **M** by **I** (whether before or after), your result is **M**, i.e. it is unchanged. Note however that **M** cannot be just *any* matrix; it must be a square matrix of the same order as **I**.

**Activity:** Consider the matrix **M** \( = \begin{pmatrix} a & b\\ c & d \\\end{pmatrix}\), and verify that **I** \( \times \) **M** = **M** and that **M** \( \times \) **I** = **M**.
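The activity above can also be spot-checked numerically; here is a minimal sketch using a specific matrix in place of the general \(\begin{pmatrix} a & b\\ c & d \\\end{pmatrix}\):

```python
# Check that multiplying by the 2 x 2 identity matrix, on either side,
# leaves a matrix unchanged. Illustrative sketch with a specific matrix.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

I = [[1, 0], [0, 1]]
M = [[3, -2], [7, 5]]   # stands in for the general (a b; c d)
assert mat_mul(I, M) == M and mat_mul(M, I) == M
print("I*M == M and M*I == M")
```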

# Part 4: 2 × 2 Matrices and linear transformations

A 2 × 2 matrix can be used to apply a **linear transformation** to points on a Cartesian grid. A linear transformation in two dimensions has the following properties:

- The origin (0,0) is mapped to the origin (it is **invariant**) under the transformation
- Straight lines are mapped to straight lines under the transformation
- Parallel lines remain parallel under the transformation

**Questions:** Decide whether each of the following transformations is a linear transformation.

- Translation by any non-zero vector
- Rotation about the origin by any angle
- Rotation about point *P*, by any angle greater than 0º but less than 360º, where *P* is not (0,0)
- Reflection in the *y*-axis
- Reflection in the line *x* = 0
- Reflection in the line *y* = *mx* where *m* is a constant
- Reflection in the line *y* = *mx* + *c* where *m* and *c* are constants and *c* is non-zero
- Enlargement by any non-zero scale factor, centre of enlargement (0,0)
- Enlargement by any non-zero scale factor, centre of enlargement *P*, where *P* is not (0,0)
- Enlargement by scale factor 0, centre of enlargement (0,0)

**Answers**

- Translation by any non-zero vector is **NOT a linear transformation** because the origin is not mapped to itself.
- Rotation about the origin by any angle is a **linear transformation.**
- Rotation about point *P*, by any angle greater than 0º but less than 360º, where *P* is not (0,0), is **NOT a linear transformation** because the origin is not mapped to itself.
- Reflection in the *y*-axis is a **linear transformation.**
- Reflection in the line *x* = 0 is a **linear transformation.**
- Reflection in the line *y* = *mx* where *m* is a constant is a **linear transformation.**
- Reflection in the line *y* = *mx* + *c* where *m* and *c* are constants and *c* is non-zero is **NOT a linear transformation** because the origin is not mapped to itself.
- Enlargement by any non-zero scale factor, centre of enlargement (0,0), is a **linear transformation.**
- Enlargement by any non-zero scale factor, centre of enlargement *P*, where *P* is not (0,0), is **NOT a linear transformation** because the origin is not mapped to itself.
- Enlargement by scale factor 0, centre of enlargement (0,0), is **NOT a linear transformation** because straight lines aren’t mapped to straight lines; in fact every point on the grid is mapped to (0,0).

### The effect of a 2 × 2 transformation matrix

To find where the matrix **M** \( = \begin{pmatrix} a & b\\c & d\end{pmatrix}\) maps the point *Q* with coordinates \((x, y)\), we multiply the matrix **M** by the position vector representation of *Q*:

i.e. we do \(\begin{pmatrix} a & b\\c & d\end{pmatrix} \begin{pmatrix} x\\y\end{pmatrix} = \begin{pmatrix} x’\\y’\end{pmatrix}\), and Q is mapped to \((x’, y’)\).

For example, the matrix \(\begin{pmatrix} 2 & 1\\-1 & 3\end{pmatrix}\) maps \((1, 1)\) to \(\begin{pmatrix} 2 & 1\\-1 & 3\end{pmatrix} \begin{pmatrix} 1\\1\end{pmatrix} = \begin{pmatrix} 3\\2\end{pmatrix}\) or the point \((3, 2)\).
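This matrix-times-position-vector calculation can be sketched in code (a minimal illustration; the tuple representation of points is an assumption of this sketch):

```python
# Apply a 2 x 2 transformation matrix to a point (x, y) by multiplying
# the matrix by the point's position vector.
def transform(M, point):
    x, y = point
    (a, b), (c, d) = M
    return (a * x + b * y, c * x + d * y)

M = [[2, 1], [-1, 3]]
print(transform(M, (1, 1)))  # (3, 2), matching the worked example above
```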

In the following applet, we will take a look at the effect of various transformations on the unit square OPQR:

- Click on “**Custom**” towards the top of the applet in order to apply custom transformations to the unit square. Drag the blue slider fully to the right and tick the box to show the basis vectors. Then vary \(a\) and see the impact this has on the basis vectors. Then try varying \(b\), \(c\), and \(d\) (one at a time) to see the impact of varying these.
- Demonstrate how the columns of the transformation matrix correspond to the transformations of two sides of the unit square given.
- Drag the blue slider fully to the **left**. Tick the boxes to show the basis vectors and to transform the gridlines too. Now drag the blue slider to the right. Note that *on the transformed grid*, the coordinates of the transformed shape are still at (0,0), (1,0), (1,1) and (0,1). The basis vectors *in terms of the untransformed grid* are however given by \(\begin{pmatrix} a\\c \end{pmatrix}\) and \(\begin{pmatrix} b\\d \end{pmatrix}\). This can be seen by **unticking** the “Transform gridlines too” box while the blue slider is fully dragged to the right.

### Deducing transformation matrices for common transformations

In the applet above, the point P has position vector \(\begin{pmatrix} 1\\0\end{pmatrix}\) and the point R has position vector \(\begin{pmatrix} 0\\1\end{pmatrix}\). The transformation matrix \(\begin{pmatrix} a & b\\c & d\end{pmatrix}\) maps P to \(\begin{pmatrix} a\\c\end{pmatrix}\) and R to \(\begin{pmatrix} b\\d\end{pmatrix}\).

You can verify these by working out \(\begin{pmatrix} a & b\\c & d\end{pmatrix} \times \begin{pmatrix} 1\\0\end{pmatrix}\) and \(\begin{pmatrix} a & b\\c & d\end{pmatrix} \times \begin{pmatrix} 0\\1\end{pmatrix}\) respectively.

By visualising the unit square—in particular how a transformation affects the points P and R—we can work backwards to quickly deduce the matrices representing many common transformations. For example, a rotation 90º anticlockwise about \((0,0)\) maps P to P’, with position vector \(\begin{pmatrix} 0\\1\end{pmatrix}\), and it maps R to R’ with position vector \(\begin{pmatrix} -1\\0\end{pmatrix}\). Therefore, the matrix representing this transformation is \(\begin{pmatrix} 0 & -1\\1 & 0\end{pmatrix}\).

### Summary of transformation matrices that you should learn or be able to deduce quickly

Reflection in the \(x\)-axis: \(\begin{pmatrix} 1 & 0\\0 & -1\end{pmatrix}\)

Reflection in the \(y\)-axis: \(\begin{pmatrix} -1 & 0\\0 & 1\end{pmatrix}\)

Reflection in the line \(y=x\): \(\begin{pmatrix} 0 & 1\\1 & 0\end{pmatrix}\)

Reflection in the line \(y=-x\): \(\begin{pmatrix} 0 & -1\\-1 & 0\end{pmatrix}\)

Enlargement by scale factor \(k\), centre at \((0,0)\): \(\begin{pmatrix} k & 0\\0 & k\end{pmatrix}\)

Rotation 90º anticlockwise about \((0,0)\): \(\begin{pmatrix} 0 & -1\\1 & 0\end{pmatrix}\)

Rotation 180º about \((0,0)\): \(\begin{pmatrix} -1 & 0\\0 & -1\end{pmatrix}\)

Rotation 270º anticlockwise about \((0,0)\): \(\begin{pmatrix} 0 & 1\\-1 & 0\end{pmatrix}\)

Rotation \(\theta\)º anticlockwise about \((0,0)\): \(\begin{pmatrix} \text{cos} \theta & -\text{sin} \theta\\ \text{sin} \theta & \text{cos} \theta \end{pmatrix}\)

Shear in the \(x\)-direction, shear factor \(k\): \(\begin{pmatrix} 1 & k\\0 & 1\end{pmatrix}\)

Shear in the \(y\)-direction, shear factor \(k\): \(\begin{pmatrix} 1 & 0\\k & 1\end{pmatrix}\)
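Several of the rotation matrices in this summary can be recovered from the general \(\theta\) formula; here is a quick numerical check (an illustrative sketch, rounding to suppress floating-point error):

```python
import math

# Build the matrix for an anticlockwise rotation of theta degrees about
# the origin, then check that theta = 90 reproduces (0 -1; 1 0).
def rotation(theta_deg):
    t = math.radians(theta_deg)
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

R = [[round(x) for x in row] for row in rotation(90)]
print(R)  # [[0, -1], [1, 0]]
```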

# Part 5: Determinants of 2 × 2 matrices

### Calculating the determinant

The **determinant** of a 2 × 2 matrix **M** is written det **M** or |**M**|.

For a 2 × 2 matrix \(\begin{pmatrix} a & b\\c & d\end{pmatrix}\), the determinant can be written det\(\begin{pmatrix} a & b\\c & d\end{pmatrix}\) or \(\begin{vmatrix} a & b\\c & d\end{vmatrix}\) and is simply equal to \(ad - bc\).

Note: Determinants can only be found for square matrices. There is a general method for working out the determinant of an \(n \) × \(n \) matrix, described in Part 9 below. At that stage, you can check that the general method applied to a 2 × 2 matrix gives you the determinant \(ad - bc\).
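The \(ad - bc\) formula is a one-liner in code (a minimal sketch, with the matrix as a Python list of rows):

```python
# Determinant of a 2 x 2 matrix (a b; c d) is ad - bc.
def det2(M):
    (a, b), (c, d) = M
    return a * d - b * c

print(det2([[4, 5], [2, 3]]))  # 2
```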

### What the determinant represents

The absolute value of the determinant of a 2 × 2 matrix **M** is equal to the area scale factor by which **M** transforms the areas of shapes. In particular, consider the parallelogram obtained by transforming the unit square. The unit square has area 1, so the parallelogram will have an area equal to the absolute value of det **M**.

If the determinant is negative, it simply indicates a change of orientation. The vertices of the unit square are O, P, Q, and R going anticlockwise. If the vertices of the image O’, P’, Q’, and R’ also run anticlockwise, then the determinant is positive. If these vertices run clockwise i.e. the orientation has changed, this means that the determinant is negative.

A matrix whose determinant is 0 is called a **singular** matrix.

A matrix whose determinant is non-zero is called **non-singular**.

# Part 6: Inverses of 2 × 2 matrices

Given two matrices **A** and **B**, if **AB** = **I**, the identity matrix, then **B** is the inverse of **A**. We can denote the inverse of **A** as **A**^{-1}, i.e. **B** = **A**^{-1}.

A square matrix **M** has an inverse, denoted **M**^{-1}, if and only if |**M**| ≠ 0. If the determinant of **M** is 0, then **M** has no inverse.

Given a matrix **M** and its inverse, **M ^{-1}**, the following will be true:

**MM**^{-1} = **I** and **M**^{-1}**M** = **I**

The inverse of a 2 × 2 matrix \(\textsf{M}\) = \(\begin{pmatrix} a & b\\c & d\end{pmatrix}\) can be found (where it exists) as follows:

\(\textsf{M}^{-1}\) = \(\dfrac{1}{\begin{vmatrix} \textsf{M} \end{vmatrix}} \begin{pmatrix} d & -b\\-c & a\end{pmatrix}\) = \(\dfrac{1}{ad-bc} \begin{pmatrix} d & -b\\-c & a\end{pmatrix}\).

**Extension 1:** Verify that **MM**^{-1} = **I**.

**Extension 2:** Verify that **M**^{-1}**M** = **I**.

**Extension 3:** Find the determinant of **M**^{-1}.
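The first two extension checks can also be spot-checked numerically; here is a minimal sketch with a specific matrix, using exact `Fraction` arithmetic to avoid floating-point error:

```python
from fractions import Fraction

# Inverse of a 2 x 2 matrix via (1/det)(d -b; -c a), with checks that
# M * M^{-1} and M^{-1} * M both give the identity. Assumes det != 0.
def inverse2(M):
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = [[4, 5], [2, 3]]
Minv = inverse2(M)
assert mat_mul(M, Minv) == [[1, 0], [0, 1]]   # M * M^{-1} = I
assert mat_mul(Minv, M) == [[1, 0], [0, 1]]   # M^{-1} * M = I
```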

**Questions**

- Can you see why (at least for a 2 \(\times\) 2 matrix) an inverse can’t exist if the matrix has determinant 0?
- Show that \(|\textsf{M}^{-1}| = \dfrac{1}{\textsf{|M|}}\).

**Answers**

- \(\textsf{M}^{-1} = \dfrac{1}{\begin{vmatrix} \textsf{M} \end{vmatrix}} \begin{pmatrix} d & -b\\-c & a\end{pmatrix}\), so if \(\textsf{|M|} = 0\), the inverse would be \(\dfrac{1}{0} \begin{pmatrix} d & -b\\-c & a\end{pmatrix}\), which is a problem because \(\dfrac{1}{0}\) is not defined.
- If \(\textsf{M}\) = \(\begin{pmatrix} a & b\\c & d\end{pmatrix}\), then \(\textsf{M}^{-1}\) = \(\dfrac{1}{\begin{vmatrix} \textsf{M} \end{vmatrix}} \begin{pmatrix} d & -b\\-c & a\end{pmatrix} = \begin{pmatrix} \dfrac{d}{\textsf{|M|}} & \dfrac{-b}{\textsf{|M|}}\\\dfrac{-c}{\textsf{|M|}} & \dfrac{a}{\textsf{|M|}}\end{pmatrix}\).
So \(|\textsf{M}^{-1}| = \dfrac{ad}{\textsf{|M|}^{2}} - \dfrac{(-b)(-c)}{\textsf{|M|}^{2}}=\dfrac{ad-bc}{\textsf{|M|}^{2}}=\dfrac{\textsf{|M|}}{\textsf{|M|}^{2}}=\dfrac{1}{\textsf{|M|}}\).

# Part 7: Invariant points and lines in 2 dimensions

An **invariant point** under a transformation is a point that maps to itself. As noted in part 4, linear transformations map the origin to the origin, so the origin is always an invariant point under a linear transformation.

An **invariant line** is a line that maps to itself. To be precise, every point on the invariant line maps to a point on the line itself. Note that the point needn’t map to itself.

A **line of invariant points** is a line where every point on the line maps to itself. Any line of invariant points is therefore an invariant line, but an invariant line is not necessarily a line of invariant points.

Use this applet to see invariant points, invariant lines, and lines of invariant points for three examples of linear transformations.

# Part 8: 3 × 3 matrices and linear transformations

3 × 3 matrices can be used to apply transformations in 3D, just as we used 2 × 2 matrices in 2D. To find where the matrix **M** \( = \begin{pmatrix} a_{11} & a_{12} & a_{13}\\a_{21} & a_{22} & a_{23}\\a_{31} & a_{32} & a_{33}\end{pmatrix}\) maps the point *Q* with coordinates \((x, y, z)\), we multiply the matrix **M** by the position vector representation of *Q*:

i.e. we do \(\begin{pmatrix} a_{11} & a_{12} & a_{13}\\a_{21} & a_{22} & a_{23}\\a_{31} & a_{32} & a_{33}\end{pmatrix} \begin{pmatrix} x\\y\\z\end{pmatrix} = \begin{pmatrix} x’\\y’\\z’\end{pmatrix}\), and Q is mapped to \((x’, y’,z’)\).

For example, the matrix \(\begin{pmatrix} 2 & 1 & 0\\-1 & 3 & 0\\0 & 0 & 4\end{pmatrix}\) maps \((1, 1, 1)\) to \(\begin{pmatrix} 2 & 1 & 0\\-1 & 3 & 0\\0 & 0 & 4\end{pmatrix} \begin{pmatrix} 1\\1\\1\end{pmatrix} = \begin{pmatrix} 3\\2\\4\end{pmatrix}\) or the point \((3, 2, 4)\).

In the following applet, we will take a look at the effect of various transformations on the unit cube:

### Deducing transformation matrices for common transformations

The transformation matrix \(\begin{pmatrix} a_{11} & a_{12} & a_{13}\\a_{21} & a_{22} & a_{23}\\a_{31} & a_{32} & a_{33}\end{pmatrix}\) maps \(\begin{pmatrix} 1\\0\\0\end{pmatrix}\) to \(\begin{pmatrix} a_{11}\\a_{21}\\a_{31}\end{pmatrix}\), \(\begin{pmatrix} 0\\1\\0\end{pmatrix}\) to \(\begin{pmatrix} a_{12}\\a_{22}\\a_{32}\end{pmatrix}\), and \(\begin{pmatrix} 0\\0\\1\end{pmatrix}\) to \(\begin{pmatrix} a_{13}\\a_{23}\\a_{33}\end{pmatrix}\).

You can verify these by working out \(\begin{pmatrix} a_{11} & a_{12} & a_{13}\\a_{21} & a_{22} & a_{23}\\a_{31} & a_{32} & a_{33}\end{pmatrix} \times \begin{pmatrix} 1\\0\\0\end{pmatrix}\), \(\begin{pmatrix} a_{11} & a_{12} & a_{13}\\a_{21} & a_{22} & a_{23}\\a_{31} & a_{32} & a_{33}\end{pmatrix} \times \begin{pmatrix} 0\\1\\0\end{pmatrix}\), and \(\begin{pmatrix} a_{11} & a_{12} & a_{13}\\a_{21} & a_{22} & a_{23}\\a_{31} & a_{32} & a_{33}\end{pmatrix} \times \begin{pmatrix} 0\\0\\1\end{pmatrix}\) respectively.

By visualising the unit cube—in particular how a transformation affects the points with position vectors \(\begin{pmatrix} 1\\0\\0\end{pmatrix}\), \(\begin{pmatrix} 0\\1\\0\end{pmatrix}\), and \(\begin{pmatrix} 0\\0\\1\end{pmatrix}\)—we can work backwards to quickly deduce the matrices representing many common transformations. For example, a rotation 90º anticlockwise about the \(z\)-axis maps \(\begin{pmatrix} 1\\0\\0\end{pmatrix}\) to \(\begin{pmatrix} 0\\1\\0\end{pmatrix}\), \(\begin{pmatrix} 0\\1\\0\end{pmatrix}\) to \(\begin{pmatrix} -1\\0\\0\end{pmatrix}\), and \(\begin{pmatrix} 0\\0\\1\end{pmatrix}\) to itself. Therefore, the matrix representing this transformation is \(\begin{pmatrix} 0 & -1 & 0\\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}\).

### Summary of transformation matrices that you should learn or be able to deduce quickly

Reflection in \(x=0\) (the \(y\)-\(z\) plane): \(\begin{pmatrix} -1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}\)

Reflection in \(y=0\) (the \(x\)-\(z\) plane): \(\begin{pmatrix} 1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 1\end{pmatrix}\)

Reflection in \(z=0\) (the \(x\)-\(y\) plane): \(\begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & -1\end{pmatrix}\)

Enlargement by scale factor \(k\), centre at \((0,0,0)\): \(\begin{pmatrix} k & 0 & 0\\ 0 & k & 0\\ 0 & 0 & k\end{pmatrix}\)

Rotation \(\theta\)º anticlockwise about the \(x\)-axis: \(\begin{pmatrix} 1 & 0 & 0\\ 0 & \text{cos} \theta & -\text{sin} \theta\\ 0 & \text{sin} \theta & \text{cos} \theta \end{pmatrix}\)

Rotation \(\theta\)º anticlockwise about the \(y\)-axis: \(\begin{pmatrix} \text{cos} \theta & 0& \text{sin} \theta\\ 0 & 1 & 0\\ -\text{sin} \theta & 0 & \text{cos} \theta \end{pmatrix}\)

Rotation \(\theta\)º anticlockwise about the \(z\)-axis: \(\begin{pmatrix} \text{cos} \theta & -\text{sin} \theta & 0\\ \text{sin} \theta & \text{cos} \theta & 0 \\ 0 & 0 & 1 \end{pmatrix}\)

# Part 9: Determinants of 3 × 3 matrices

### Minors and cofactors

Before we can find the determinant of a 3 × 3 (or larger) square matrix, we need to learn some new terminology.

Each element of a square matrix has a **minor**. The minor of the element is found by removing the row and column containing that element, and calculating the determinant of the remaining matrix.

Each element of a square matrix also has a **cofactor**. The cofactor of the element is either equal to its minor multiplied by 1, or its minor multiplied by -1. In other words, it is either the minor itself or the minor with its sign changed. We decide whether to keep or change the sign as follows: the cofactor of the element in the \(i\text{th}\) row and \(j\text{th}\) column is the minor of the element multiplied by \((-1)^{i+j}\). In other words, if \(i+j\) is even, we keep the minor’s sign the same, and if \(i+j\) is odd, we change its sign. For a 3 × 3 matrix, the following pattern shows whether each element’s cofactor keeps the minor’s sign (+) or changes it (−): \(\begin{pmatrix} + & - & +\\ - & + & -\\ + & - & + \end{pmatrix}\)

This applet guides you through the process, step-by-step:

### Finding the determinant

To find the determinant of a matrix, we need to find the cofactors of all of the elements in just one row *or* column. This means that we can find the determinant of a 3 × 3 matrix without needing to find the cofactors of all nine elements.

To find the determinant, pick a row or column. For that row or column, multiply each element by its cofactor and note the product, and finally add these products.

Whichever row or column you choose, you should get the same answer, but it will save time if you choose a row or column with zeroes in it, if possible. If you have a 0 element, then when you multiply this by its cofactor, you will get 0. This means that you can save time by not bothering to find that element’s cofactor.
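The procedure described above can be sketched in code, expanding along the first row (matrices as Python lists of rows — a representation chosen for illustration):

```python
# Determinant of a 3 x 3 matrix by cofactor expansion along the first row:
# multiply each element by its cofactor ((-1)^(i+j) times its minor)
# and add the products.
def minor(M, i, j):
    # Remove row i and column j, then take the 2 x 2 determinant.
    rows = [row[:j] + row[j+1:] for k, row in enumerate(M) if k != i]
    (a, b), (c, d) = rows
    return a * d - b * c

def det3(M):
    return sum(M[0][j] * (-1) ** j * minor(M, 0, j) for j in range(3))

A = [[2, 1, 0], [-1, 3, 0], [0, 0, 4]]
print(det3(A))  # 28
```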

Use this applet to practise finding the determinant. To start with, it would be good to repeat each question using a different row or column to verify that you always get the same answer.

### What the determinant represents

In part 5, we saw that the absolute value of the determinant of a 2 × 2 matrix **M** is equal to the area scale factor by which **M** transforms the areas of shapes. Similarly, the absolute value of the determinant of a 3 × 3 matrix **M** is equal to the volume scale factor by which **M** transforms the volumes of shapes. (We can also extend this idea to higher dimensions, though it is very hard to visualise objects (let alone transformations of such objects) beyond the third dimension!)

Remember, a matrix with determinant zero is called a **singular** matrix. A singular 3 × 3 matrix maps the unit cube to a plane, to a line, or (in the case of the zero matrix) to the point (0,0,0). A matrix with non-zero determinant is called **non-singular**.

# Part 10: Inverses of 3 × 3 matrices

To find the inverse, **M**^{-1}, of a 3 × 3 matrix **M** (if **M**^{-1} exists), we first need to find the cofactor matrix of **M**, which is the matrix made up of the cofactors of the 9 elements of **M**. We first came across cofactors in part 9.

We also need to be able to find the **transpose** of a matrix. We can obtain the transpose of a matrix by writing its rows as its columns and vice versa. This is equivalent to reflecting its elements along its diagonal (from top-left to bottom-right). Here is an example:

If **A** \( = \begin{pmatrix} 4 & 5 & -7\\ 2 & -3 & 0 \\ 1 & -6 & 8 \\ \end{pmatrix}\), then the transpose of **A**, denoted **A**^{T}, is \(\begin{pmatrix} 4 & 2 & 1\\ 5 & -3 & -6 \\ -7 & 0 & 8 \\ \end{pmatrix}\).
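Transposing is a one-liner in code (a minimal sketch reproducing the example above, with matrices as Python lists of rows):

```python
# Transpose a matrix: write its rows as columns and vice versa.
def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[4, 5, -7], [2, -3, 0], [1, -6, 8]]
print(transpose(A))  # [[4, 2, 1], [5, -3, -6], [-7, 0, 8]]
```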

If **M** has cofactor matrix **C** and is non-singular, then **M**^{-1}\(=\frac{1}{\text{det }\textbf{M}}\)**C**^{T}.
Use this applet to practise finding the inverse of 3 × 3 matrices.
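The \( \frac{1}{\det \textbf{M}}\,\textbf{C}^{T}\) recipe can be sketched in code (illustrative only; it assumes **M** is non-singular, and uses exact `Fraction` arithmetic):

```python
from fractions import Fraction

# Inverse of a 3 x 3 matrix M as (1/det M) * C^T, where C is the
# cofactor matrix. Assumes M is non-singular.
def minor(M, i, j):
    rows = [row[:j] + row[j+1:] for k, row in enumerate(M) if k != i]
    (a, b), (c, d) = rows
    return a * d - b * c

def inverse3(M):
    det = sum(M[0][j] * (-1) ** j * minor(M, 0, j) for j in range(3))
    cof = [[(-1) ** (i + j) * minor(M, i, j) for j in range(3)]
           for i in range(3)]
    # Transpose the cofactor matrix and divide by the determinant.
    return [[Fraction(cof[j][i], det) for j in range(3)] for i in range(3)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

M = [[2, 1, 0], [-1, 3, 0], [0, 0, 4]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert mat_mul(M, inverse3(M)) == I3   # M * M^{-1} = I
```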

# Part 11: Matrices and simultaneous equations

### Solving simultaneous equations

We can use our knowledge of matrix multiplication and inverse matrices to solve simultaneous equations. For example, consider this pair of simultaneous equations:

\(4x + 5y = 37\\2x + 3y = 19\)

These can be rewritten as a product of a \(\color{red}{\text{coefficient matrix}}\) and a \(\color{green}{\text{column vector}}\): \(\color{red}{ \begin{pmatrix} 4 & 5\\ 2 & 3 \\\end{pmatrix}}\color{green}{\begin{pmatrix} x\\ y \\\end{pmatrix}}=\begin{pmatrix} 37\\ 19 \\\end{pmatrix}\)

Remember, *solving* the pair of simultaneous equations in the above case means finding the *values* of \(x\) and \(y\) that satisfy the equations. We can do this by left-multiplying both sides of the matrix equation by the inverse of \(\color{red}{\begin{pmatrix} 4 & 5\\ 2 & 3 \\\end{pmatrix}}\), which is \(\color{blue}{\frac{1}{2}\begin{pmatrix} 3 & -5\\ -2 & 4 \\\end{pmatrix}}\):

\(\color{blue}{\frac{1}{2}\begin{pmatrix} 3 & -5\\ -2 & 4 \\\end{pmatrix}}\color{red}{\begin{pmatrix} 4 & 5\\ 2 & 3 \\\end{pmatrix}}\color{green}{\begin{pmatrix} x\\ y \\\end{pmatrix}}=\color{blue}{ \frac{1}{2}\begin{pmatrix} 3 & -5\\ -2 & 4 \\\end{pmatrix}} \begin{pmatrix} 37\\ 19 \\\end{pmatrix}\)

Since the red and blue are inverses (and therefore multiply to give us the identity matrix), this simplifies to:

\(\color{green}{\begin{pmatrix} x\\ y \\\end{pmatrix}}=\color{blue}{ \frac{1}{2}\begin{pmatrix} 3 & -5\\ -2 & 4 \\\end{pmatrix}} \begin{pmatrix} 37\\ 19 \\\end{pmatrix}\)

Multiplying the right-hand side, we find:

\(\color{green}{\begin{pmatrix} x\\ y \\\end{pmatrix}}=\begin{pmatrix} 8\\ 1 \\\end{pmatrix}\)
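The same left-multiply-by-the-inverse method can be sketched in code (a minimal illustration mirroring the worked example; exact `Fraction` arithmetic avoids rounding):

```python
from fractions import Fraction

# Solve 4x + 5y = 37, 2x + 3y = 19 by left-multiplying the right-hand side
# by the inverse of the coefficient matrix (1/det)(d -b; -c a).
def solve2(M, rhs):
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    x = (d * rhs[0] - b * rhs[1]) / det
    y = (-c * rhs[0] + a * rhs[1]) / det
    return x, y

x, y = solve2([[4, 5], [2, 3]], (37, 19))
print(x, y)  # 8 1
```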

### Graphical visualisations

In part 6, we were introduced to the idea that not all matrices have inverses. For the matrix **M** to have an inverse, we need |**M**| ≠ 0 i.e. we need **M** to be non-singular. When the coefficient matrix is non-singular, we can find a unique solution to the set of linear simultaneous equations. A non-singular \(2 \times 2\) matrix corresponds to a pair of equations of lines that intersect at exactly one point, with this point defining the solution:

A non-singular \(3 \times 3\) matrix corresponds to a set of three planes that intersect at exactly one point as shown. It might be easier to appreciate how three planes can intersect at this point by starting with just one plane (the red one), and then introducing the next two planes one at a time:

Note that you can click and drag in the applet to rotate your view.

#### What if the determinant is 0?

If the coefficient matrix has a determinant of 0, this will be because the simultaneous equations have either no solutions or an infinite number of solutions. When there are **no solutions**, we say that the set of simultaneous equations is **inconsistent**. (Note that when there are infinitely many solutions, the equations are consistent, so a coefficient matrix with a determinant of 0 does not necessarily imply that the set of simultaneous equations is inconsistent.) The following applets graphically illustrate the scenarios (in 2D and 3D) in which a coefficient matrix can have a determinant of 0:

##### 2\(\times\)2 matrices

##### 3\(\times\)3 matrices

Click each scenario to see a graphical illustration. Note that you can click and drag in the applet to rotate your view.