In [4]:
import numpy as np

Week 1 - Systems of linear equations

Geometric notion of singularity

Systems of equations as lines:

Terms:

  1. Complete, non-singular - unique solution (lines intersect at a point)
  2. Redundant, singular - infinitely many solutions (overlapping lines)
  3. Contradictory, singular - no solutions (parallel lines); see the numpy check below
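As a quick numpy check (a sketch with made-up 2x2 systems, not from the course), the determinant separates the non-singular case from the two singular ones:

In [ ]:
# Complete, non-singular: det != 0, unique solution
A = np.array([[1, 1], [1, -1]])
print(np.linalg.det(A))             # -2.0
print(np.linalg.solve(A, [10, 2]))  # [6. 4.]

# Redundant: row 2 is twice row 1, so det = 0 (singular)
B = np.array([[1, 1], [2, 2]])
print(np.linalg.det(B))             # 0.0

# A contradictory system has the same singular matrix; only the
# constants differ, so the determinant alone can't tell the two apart.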

Systems of equations as matrices

Linear dependence and independence

Takeaway:

  • Rows are linearly dependent if some row can be obtained as a linear combination of the other rows
  • Rows are linearly independent if no row can be obtained from the others

The Determinant

Determinant: a formula that returns 0 if the matrix is singular and a non-zero number if the matrix is non-singular
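For a 2x2 matrix, the formula is the product of the main diagonal minus the product of the anti-diagonal:

$$ \det\left(\begin{bmatrix} a & b \\ c & d \end{bmatrix}\right) = ad - bc $$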

Example problems, and whether they are singular/non-singular:

In a 3x3 matrix

You can check singularity for a bigger system by considering only the variable coefficients. The constants don't matter for singularity.

Linear dependence works the same way

  • A row that is the element-wise product of other rows = linearly independent (the relation is not linear)
  • Average of row x and row y to make row z = linearly dependent (a linear combination)
  • No relations between rows = linearly independent

Examples of averaging rows to show linear dependence:

Quiz: Dependence and singularity

You can also apply determinants to a 3x3 matrix:

  • You need to wrap around to complete the diagonals

  • Answer: the upper (left-to-right) diagonals sum to 6, the lower (right-to-left) diagonals sum to 5, and the determinant is 6 - 5 = 1. Thus, the matrix is non-singular.
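A minimal numpy sketch of this diagonal trick, using a made-up 3x3 matrix (not the quiz matrix):

In [ ]:
# Sum the three left-to-right (wrap-around) diagonals, then subtract
# the three right-to-left ones
m = np.array([[1, 2, 3], [0, 1, 2], [2, 0, 1]])
upper = m[0,0]*m[1,1]*m[2,2] + m[0,1]*m[1,2]*m[2,0] + m[0,2]*m[1,0]*m[2,1]
lower = m[0,2]*m[1,1]*m[2,0] + m[0,0]*m[1,2]*m[2,1] + m[0,1]*m[1,0]*m[2,2]
print(upper - lower)        # 3
print(np.linalg.det(m))     # 3.0, matches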
In [5]:
# Calculations to solve week 1 quiz

matrix = np.array([[2,1,5], [1,2,1], [2,1,3]])
print(np.linalg.det(matrix))
vec = np.array([20, 10, 15])
print(np.linalg.solve(matrix, vec))


m2 = np.array([[1,2,3], [0,2,2], [1,4,5]])
print(np.linalg.det(m2))
-6.0
[2.5 2.5 2.5]
0.0

Week 2 - Solving Systems of Linear Equations

Solving non-singular system of linear equations

Solving singular system of linear equations

Takeaway: The solved system shows that there are infinitely many solutions, parameterized by a degree of freedom $x$

  • Works for contradictions as well

Solving systems of equations with more variables

Takeaway: Eliminate one variable, then solve for the others, then propagate the values back to the equation with the first variable
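A rough numpy sketch of the propagate-back step, assuming elimination has already produced an upper-triangular system (the numbers are my own):

In [ ]:
# After elimination the system is upper-triangular; back-substitution
# solves the last equation first and propagates the values upward
U = np.array([[2., 1., 1.], [0., 3., 1.], [0., 0., 2.]])
c = np.array([9., 7., 4.])
x = np.zeros(3)
for i in range(2, -1, -1):
    x[i] = (c[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
print(x)                    # matches np.linalg.solve(U, c)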

Matrix row-reduction (Gaussian elimination)

Mapping solving equations to a matrix form:

Row echelon form: the general shape of a matrix in row echelon form is the following.

  • On the main diagonal, we have a run of 1s followed perhaps by a run of 0s. You could have all 1s, or all 0s.
  • Below the diagonal everything is zero; to the right of the 1s any number is allowed; and to the right of the 0s, everything must be zero.
  • Following these rules, in the case of 2x2 matrices only three things can happen (written out below):
    1. You have 1, 1 on the diagonal.
    2. You have 1, 0 on the diagonal.
    3. You have 0, 0 on the diagonal.
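Written out, with "*" standing for any number:

$$ \begin{bmatrix} 1 & * \\ 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & * \\ 0 & 0 \end{bmatrix}, \quad \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} $$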

Row operations that preserve singularity

Checking for singularity after an operation can be done by checking its determinant.

Swapping rows maintains singularity:

Multiplying rows by a non-zero scalar maintains singularity:

Adding a row to another row maintains singularity:
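A quick numpy check of all three operations on a made-up non-singular matrix (my own example):

In [ ]:
m = np.array([[3., 1.], [1., 2.]])
print(np.linalg.det(m))                            # 5.0
print(np.linalg.det(m[::-1]))                      # swap rows: -5.0, still non-zero
print(np.linalg.det(m * np.array([[10.], [1.]])))  # row 1 times 10: 50.0
m2 = m.copy()
m2[1] += m2[0]                                     # add row 1 to row 2
print(np.linalg.det(m2))                           # 5.0, unchanged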

In [6]:
# Calculations to solve quiz 3

matrix = np.array([[-3, 8, 1], [2, 2, -1], [-5, 6, 2]])
# -6.1e-15 is zero up to floating-point error, so the matrix is singular
print(np.linalg.det(matrix))
-6.106226635438351e-15

Row Echelon Form and Rank

Rank of a matrix

One way to determine rank is to count how many novel (independent) equations each row contributes. The rank is the amount of information inherent in the matrix.

But there is an easier way to do this - using row echelon form.

Row echelon form of a matrix

How to find row echelon form (REF) in a 2x2 matrix:

Interestingly, the rank of a matrix is the number of 1s on the diagonal of its row echelon form:
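numpy's matrix_rank agrees with this (example matrices of my own):

In [ ]:
print(np.linalg.matrix_rank(np.array([[1, 1], [1, 2]])))  # 2: independent rows
print(np.linalg.matrix_rank(np.array([[1, 1], [2, 2]])))  # 1: row 2 = 2 * row 1
print(np.linalg.matrix_rank(np.zeros((2, 2))))            # 0: no information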

How to find it with a 3x3 matrix. Here's how the systems are represented as equations and as a matrix:

How to determine REF for a larger matrix ("*" stands for any number, zero or non-zero):

These correspond to the rank of the matrix:

Reduced row echelon form

Recall how we calculated the reduced row echelon form:

Rules of reduced row echelon form:

How to get to the reduced row echelon form (RREF) for a large matrix:

Detailed example of how we get to the RREF:

In [7]:
## Quiz 3 code

arr_1 = np.array([[7,5,3],[3,2,5],[1,2,1]])
arr_2 = np.array([120, 70, 20])

# Solve system of equations
print(np.linalg.solve(arr_1, arr_2))

# Find its determinant
print(np.linalg.det(arr_1))

# What is its rank
print(np.linalg.matrix_rank(arr_1))
[15.  0.  5.]
-34.00000000000001
3

Week 3 - Vectors and Linear Transformations

Vector distances

How to calculate vector distance (L1, L2, cosine):
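A sketch of the three in numpy (my own example vectors):

In [ ]:
u, v = np.array([1., 2.]), np.array([4., 6.])
print(np.sum(np.abs(u - v)))    # L1 distance: 7.0
print(np.linalg.norm(u - v))    # L2 distance: 5.0
cos_sim = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(1 - cos_sim)              # cosine distance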

Multiplying a vector by a scalar

The vector is multiplied element-wise:

Dot Product

Dot product of 2 vectors:

Clearer diagram of order of operations:

The L2 norm is always the square root of the dot product of the vector with itself.

  • Angled brackets express a dot product
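Checking that identity in numpy:

In [ ]:
v = np.array([3., 4.])
print(np.sqrt(np.dot(v, v)))    # 5.0: sqrt of the dot product with itself
print(np.linalg.norm(v))        # 5.0: the L2 norm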

Geometric Vectors

  • Orthogonal (perpendicular) vectors always have a dot product of 0
  • Vectors on one side of a vector (at less than 90°) have a positive dot product with it
  • Vectors on the other side (at more than 90°) have a negative dot product

The dot product of $u$ and $v$ is the product of their norms times the cosine of the angle between them: $u \cdot v = \lVert u \rVert \lVert v \rVert \cos\theta$
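Verifying both claims in numpy (my own example vectors):

In [ ]:
u, v = np.array([1., 0.]), np.array([0., 2.])
print(np.dot(u, v))             # 0.0: orthogonal vectors
a, b = np.array([1., 1.]), np.array([2., 0.])
theta = np.pi / 4               # 45 degrees between a and b
print(np.linalg.norm(a) * np.linalg.norm(b) * np.cos(theta))  # ~2.0
print(np.dot(a, b))             # 2.0: matches |a||b|cos(theta)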

Multiplying a matrix by a vector

Think of a system of equations as dot products: each equation's coefficients form a row vector that is dotted with the vector of variables:
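In numpy terms, each entry of a matrix-vector product is the dot product of one row with the vector (my own example):

In [ ]:
A = np.array([[2, 1], [1, 3]])
x = np.array([5, 4])
print(A @ x)                          # [14 17]
print([np.dot(row, x) for row in A])  # same values, row by row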

Matrices as linear transformations

In [8]:
# Done in numpy
a1 = np.array([[2, -1], [0, 2]])
a2 = np.array([[3, 1],[1, 2]])
np.matmul(a1, a2)
Out[8]:
array([[5, 0],
       [2, 4]])

Note on the condition that must be met for matmul

  • Mathematically, matrix multiplication is defined only if
    • the number of columns of matrix $A$ is equal to
    • the number of rows of matrix $B$.
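A quick shape check (hypothetical shapes):

In [ ]:
A = np.ones((2, 3))             # 2 rows, 3 columns
B = np.ones((3, 4))             # 3 rows, 4 columns
print(np.matmul(A, B).shape)    # (2, 4): columns of A == rows of B
try:
    np.matmul(B, B)             # (3,4) x (3,4): 4 != 3, not defined
except ValueError as e:
    print("shape mismatch:", e)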

The Identity Matrix

  • The identity matrix is the matrix that, when multiplied by any other matrix, gives that same matrix. Its corresponding linear transformation is very simple: it is the one that leaves the plane intact.
  • Identity matrices have 1s on the diagonal and 0s everywhere else.
  • Multiplying it with any vector leaves the vector intact
In [9]:
# Example:
a1 = np.identity(10)
print("Identity matrix\n", a1)

a2 = np.array(np.arange(1,11), dtype="float32")
print("a2 vector\n", a2)

print("matrix multiplication of identity matrix and a2\n", np.matmul(a1,a2))
Identity matrix
 [[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]
a2 vector
 [ 1.  2.  3.  4.  5.  6.  7.  8.  9. 10.]
matrix multiplication of identity matrix and a2
 [ 1.  2.  3.  4.  5.  6.  7.  8.  9. 10.]

Matrix inverse

  • The inverse of a matrix is the matrix whose product with the original matrix is the identity matrix.
  • In a linear transformation, the inverse matrix is the one that undoes the job of the original matrix, namely the one that returns the plane to where it was at the beginning.

How to find an inverse matrix:

  • The inverse matrix is written as the original matrix raised to the power of -1 (i.e. $A^{-1}$)
  • We find it by solving a system of linear equations, setting each dot product equal to the corresponding entry of the identity matrix (see the sketch below):
    1. Dot product of row 1 and column 1 = 1
    2. Dot product of row 1 and column 2 = 0
    3. Dot product of row 2 and column 1 = 0
    4. Dot product of row 2 and column 2 = 1
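A numpy sketch of this idea: each column of the inverse solves a linear system whose right-hand side is a column of the identity.

In [ ]:
A = np.array([[3., 1.], [1., 2.]])
# Column j of the inverse solves A x = e_j (a system of equations)
cols = [np.linalg.solve(A, e) for e in np.identity(2)]
print(np.column_stack(cols))    # matches np.linalg.inv(A)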

Note that some matrices do not have an inverse, like $[[1,1],[2,2]]$

In [10]:
# Calculate matrix inverse with numpy
from numpy.linalg import LinAlgError

a1 = np.array([[3,1],[1,2]])
print("a1\n", a1)
print("inverse:", np.linalg.inv(a1))

a2 = np.array([[5,2],[1,2]])
print("a2\n", a2)
print("inverse:", np.linalg.inv(a2))

a3 = np.array([[1,1],[2,2]])
print("a3\n", a3)
try:
    np.linalg.inv(a3)
except LinAlgError:
    print("Can't solve, it's a singular matrix!")
a1
 [[3 1]
 [1 2]]
inverse: [[ 0.4 -0.2]
 [-0.2  0.6]]
a2
 [[5 2]
 [1 2]]
inverse: [[ 0.25  -0.25 ]
 [-0.125  0.625]]
a3
 [[1 1]
 [2 2]]
Can't solve, it's a singular matrix!

Which matrices have an inverse?

  • Non-singular matrices are invertible, and have non-zero determinants
  • Singular matrices are non-invertible, and have a zero determinant

Neural networks and matrices

Quiz:

Answer:

  • Lottery: 1pt
  • Win: 1pt
  • Threshold: 1.5pt

Graphical representation of best fit linear classifier:

You can see how the one-hot vectors' dot product with the model vector returns a vector of products. We can then check whether each is above/below the threshold:

We can reconfigure the threshold as the bias. Then we can just check whether the product is positive/negative:

The AND operator can be modeled as a perceptron (the ML course went over this):

Represented as a perceptron:
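A minimal sketch of that AND perceptron, reusing the 1, 1 weights and 1.5 threshold from the quiz answer above:

In [ ]:
# AND as a perceptron: weights 1 and 1, threshold 1.5 (bias -1.5);
# the output is 1 only when the weighted sum clears the threshold
w, bias = np.array([1., 1.]), -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, int(np.dot(w, x) + bias > 0))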

Quiz

Week 4 - Determinants and Eigenvectors

Rank of linear transformations

Rank: The rank of a linear transformation is equal to the number of dimensions of its output (the image).

Examples of rank (see diagram above):

  1. (Left) A non-singular matrix sends the 2D plane to the full 2D plane, i.e. a 2D result, i.e. rank 2
  2. (Center) A singular matrix sends the plane to a line, i.e. a 1D result, i.e. rank 1
  3. (Right) The zero matrix sends the plane to a point, i.e. a 0D result, i.e. rank 0

Determinant as an area

Determinant: the (signed) area of the image of the unit square after the linear transformation.

  • If a singular matrix is used for the linear transform, the result loses a dimension (e.g. 3D -> 2D, 2D -> 1D), so the area is 0.
  • Determinants can be negative (the orientation flips), but that doesn't affect the singularity of the matrix.
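A numpy sketch (my own matrix): the unit square is sent to a parallelogram whose signed area is the determinant.

In [ ]:
A = np.array([[3., 1.], [1., 2.]])
print(np.linalg.det(A))           # 5.0: the unit square maps to area 5
print(np.linalg.det(A[:, ::-1]))  # -5.0: swapped columns flip orientation;
                                  # same area, still non-singular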

Determinant of a product

Rule: If you multiply two matrices ($A_1$ and $A_2$), the determinant of the product is the product of their determinants.

$$ \det(A_1 \cdot A_2) = \det(A_1) \cdot \det(A_2), \quad \text{i.e.} \quad \det(AB) = \det(A)\det(B) $$

Rule: The product of a singular and a non-singular matrix (in any order) is singular

  • Since $\det(AB)=\det(A)\cdot \det(B)$, if either $A$ or $B$ has determinant 0, the whole product vanishes. Therefore $\det(AB)=0$ and the resulting matrix is singular.
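Checking both rules in numpy (my own matrices):

In [ ]:
A = np.array([[2., 1.], [1., 2.]])  # det 3
B = np.array([[1., 1.], [2., 2.]])  # det 0: singular
C = np.array([[1., 2.], [3., 4.]])  # det -2
print(np.linalg.det(A @ C), np.linalg.det(A) * np.linalg.det(C))  # both -6.0
print(np.linalg.det(A @ B))         # 0.0: a singular factor makes the product singular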

Determinants of inverses

Rule: The determinant of the inverse of a matrix is the reciprocal of the matrix's determinant

$$ det(A^{-1}) = \frac{1}{det(A)} $$

Proof:

  1. Start from the determinant-of-a-product rule: $\det(AB) = \det(A)\det(B)$
  2. Set $B = A^{-1}$, giving $\det(AA^{-1}) = \det(A)\det(A^{-1})$
  3. $AA^{-1}$ is the identity matrix
  4. The determinant of an identity matrix is 1, so $\det(A)\det(A^{-1}) = 1$
  5. Therefore, $\det(A^{-1}) = \frac{1}{\det(A)}$
In [11]:
# a and b are the rows of the inverse of a2 computed above; the
# determinant 0.125 = 1/8 = 1/det(a2) matches the rule
a = np.array([0.25, -0.25])
b = np.array([-0.125, 0.625])
np.linalg.det([a, b])
Out[11]:
0.12500000000000003
In [12]:
# Code for Quiz

# Calculate determinant of a matrix
a = np.array([[1,2,-1], [1,0,1], [0,1,0]])
print(np.linalg.det(a))

# Calculate inverse matrix
print(np.linalg.inv(a))

# Multiply inverse with identity matrix
print(np.matmul(np.linalg.inv(a), np.identity(3)))

# Is the rank of a 3x3 identity matrix singular or non-singular
print(np.linalg.matrix_rank(np.identity(3)))

# Multiply matrix W with vector b
W = np.array([[1,2,-1],[1,0,1],[0,1,0]])
b = np.array([5,-2,0])
print(np.dot(W, b))

# Dot product of vector z1 and z2
z1 = np.array([3,1,7])
z2 = np.array([2,2,0])
print(np.dot(z1, z2))

# Multiply two matrices a and b
a = np.array([[5,2,3],[-1,-3,2],[0,1,-1]])
b = np.array([[1,0,-4],[2,1,0],[8,-1,0]])
print(np.matmul(a,b))

# Calculate the determinant of the inverse of above matrix
print(np.linalg.det(np.linalg.inv(np.matmul(a,b))))
-2.0
[[ 0.5  0.5 -1. ]
 [-0.  -0.   1. ]
 [-0.5  0.5  1. ]]
[[ 0.5  0.5 -1. ]
 [ 0.   0.   1. ]
 [-0.5  0.5  1. ]]
3
[ 1  5 -2]
8
[[ 33  -1 -20]
 [  9  -5   4]
 [ -6   2   0]]
902163386893.1295

Bases

In 2D, a basis is a pair of vectors starting from the origin.

  • The main property of a basis is that every point in the space can be expressed as a linear combination of its elements

What's not a basis?

  1. Two vectors in the same direction
  2. (or in opposite directions)

Span

The span of a set of vectors is the set of points that can be reached by walking in the direction of these vectors in any combination.

  • All bases (in 2D) can cover the entire plane
  • Non-bases can only cover a 1D line

A basis is a minimal spanning set

  • A single vector can be a basis (of a line), but a set where more than one vector lies on the same trajectory cannot be.
  • 3 vectors on a 2d plane is not a basis (but a subset of 2 out of 3 is)

The number of elements in the basis is the number of dimensions

  • 1 vector = 1 dimension
  • 2 vectors (not on same trajectory) = 2 dimensions
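A quick numpy check (my own vectors): two vectors form a basis of the plane exactly when stacking them gives a non-singular, rank-2 matrix.

In [ ]:
print(np.linalg.matrix_rank(np.array([[1, 0], [1, 1]])))  # 2: a basis
print(np.linalg.matrix_rank(np.array([[1, 2], [2, 4]])))  # 1: same trajectory,
                                                          # not a basis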

Example of a "change of basis":

Eigenbases

From ChatGPT:

An eigenbasis is a set of linearly independent eigenvectors of a linear transformation or a square matrix

  • Eigenvector: An eigenvector of a linear transformation or a matrix is a non-zero vector that, when transformed by the linear transformation or multiplied by the matrix, remains proportional to itself, possibly with a scaling factor known as the eigenvalue.
  • Eigenvalue: An eigenvalue associated with an eigenvector represents the scaling factor by which the eigenvector is stretched or compressed during the transformation. It's a scalar value.

Eigenvalues and Eigenvectors

References:

My notes:

  • Eigenvalues ($\lambda$): Scalars. Value to stretch an eigenvector.
  • Eigenvectors ($\vec{v}$): Vectors that keep the same span even after linear transformation, ie. matrix multiplication.

Finding eigenvalues pt.1

Given:

  1. $A$ - transformation matrix
  2. $\vec{v}$ - eigenvector
  3. $\lambda$ - eigenvalue

Matrix multiplication $A\vec{v}$ is equal to the scalar multiplication $\lambda \vec{v}$. To find the eigenvalues and eigenvectors, we solve for $\lambda$ and $\vec{v}$:

$$ \begin{align*} A\vec{v} &= \lambda \vec{v} \\ \text{ (Matrix-vector multiplication) } &= \text{ (Scalar multiplication)} \end{align*} $$

Steps to solve it

Step 1 - $\lambda$ is a scalar, not a matrix, so rewrite it as $\lambda I$, where $I$ is the identity matrix (multiplying by $I$ changes nothing)

$$ A\vec{v} = (\lambda I) \vec{v} $$

Step 2 - Move all to one side to solve for zero, then factor out the $\vec{v}$

$$ \begin{align*} A\vec{v} - (\lambda I) \vec{v} &= \vec{0} \\ (A - \lambda I)\vec{v} &= \vec{0} \end{align*} $$

Step 3 - Find $\lambda$ so that the determinant of $A - \lambda I$ is zero.

$$ det(A-\lambda I) = 0 $$

Example: Given a matrix of coordinates $\begin{bmatrix} 3 & 1 \\ 0 & 2 \end{bmatrix}$:

  1. Find the eigenvalues by setting the determinant equal to zero.
$$ \det\left(\begin{bmatrix} 3-\lambda & 1 \\ 0 & 2-\lambda \end{bmatrix}\right) = (3-\lambda)(2-\lambda) = 0 \implies \lambda = 3, 2 $$
  2. Find the eigenvectors by plugging each eigenvalue back into the matrix and solving for the eigenvector ($\vec{v}$)
$$ \begin{align*} \begin{bmatrix} 3-2 & 1 \\ 0 & 2-2 \end{bmatrix}\begin{bmatrix}v_1 \\ v_2 \end{bmatrix} &= \begin{bmatrix} 0 \\ 0 \end{bmatrix} \\ \begin{bmatrix} 3-3 & 1 \\ 0 & 2-3 \end{bmatrix}\begin{bmatrix}v_1 \\ v_2 \end{bmatrix} &= \begin{bmatrix} 0 \\ 0 \end{bmatrix} \end{align*} $$
  3. For $\lambda = 3$ the eigenvector is $\begin{bmatrix}1\\0\end{bmatrix}$; for $\lambda = 2$ it is any multiple of $\begin{bmatrix}1\\-1\end{bmatrix}$
  • (numpy returns the normalized version $[-0.707, 0.707]$, as shown below)

Calculation in numpy

In [16]:
A = np.array([[3,1],[0,2]])
val, vec = np.linalg.eig(A)
print("Eigenvalue", val)
print("Eigenvectors", vec)
Eigenvalue [3. 2.]
Eigenvectors [[ 1.         -0.70710678]
 [ 0.          0.70710678]]

Finding eigenvalues pt.2

Its determinant, when expanded, gives a polynomial called the characteristic polynomial. The values of $\lambda$ where it equals zero are the eigenvalues.
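For the 2x2 example from pt. 1, the characteristic polynomial is $(3-\lambda)(2-\lambda) = \lambda^2 - 5\lambda + 6$, and numpy can find its roots directly:

In [ ]:
# Roots of the characteristic polynomial lambda^2 - 5*lambda + 6
print(np.roots([1, -5, 6]))     # [3. 2.]: the eigenvalues found above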

Then you can plug them back into the original matrix $m1$ to find eigenvectors $x$ and $y$:

Solution:

Quiz
