Data Analytics: Application of Linear Algebra - Fibonacci Sequence
Written by FemiByte   
Thursday, 14 February 2013 05:30

Application of Linear Algebra - Fibonacci sequence

The Fibonacci sequence is defined as: \[ f_{n+1} = f_n + f_{n-1},\text{ }f_0=0, f_1=1 \] Thus for $n=2$, we have $f_2 = f_1 + f_0$. We can write the above equation in matrix form as follows: \[ \left[ \begin{array}{c}f_2\\f_1\end{array}\right]= \left[ \begin{array}{cc}1&1\\1&0\end{array}\right] \left[ \begin{array}{c}f_1\\f_0\end{array}\right] \] Let us refer to the matrix $\left[ \begin{array}{cc}1&1\\1&0\end{array}\right]$ as $A$.
If we iterate using the above matrix equation, we discover that we can obtain an expression for $f_n$ by multiplying by $A$ $n$ times: \[ \left[ \begin{array}{c}f_{n+1}\\f_n\end{array}\right]=\left[\begin{array}{cc}1&1\\1&0\end{array}\right]^n \left[ \begin{array}{c}f_1\\f_0\end{array}\right] \]

The question then becomes how we can use this iterative relation to arrive at a closed-form expression for $f_n$. Putting on our Linear Algebra hat, we recognize that we can obtain an expression for the $n$-th power of $A$ if $A$ is diagonalizable. To do that, let us obtain the eigenvalues from the well-known equation $\text{det(}A-\lambda I \text{)}=0$, i.e. $(1-\lambda)(0-\lambda)-1=0$, which produces
$\lambda^2-\lambda-1=0$. Solving this equation, we obtain the following 2 eigenvalues:
\[\lambda_1=\frac{1+\sqrt{5}}{2}, \lambda_2=\frac{1-\sqrt{5}}{2} \]
Since the eigenvalues are different, we conclude that $A$ is diagonalizable and can be written as
\[AS=S \Lambda \;\Rightarrow\; A = S \Lambda S^{-1} \]
where $S$ is the matrix of eigenvectors.

From this we can show that $A^n = S \Lambda^n S^{-1}$.[1]
The eigenvectors can be found by solving
\[\left[\begin{array}{cc}1-\lambda_1&1\\1&-\lambda_1\end{array}\right]\left[\begin{array}{c}x_1\\y_1\end{array}\right]=\left[\begin{array}{c}0\\0\end{array}\right] \]
\[\left[\begin{array}{cc}1-\lambda_2&1\\1&-\lambda_2\end{array}\right]\left[\begin{array}{c}x_2\\y_2\end{array}\right]=\left[\begin{array}{c}0\\0\end{array}\right] \]
which produce the following eigenvalue/eigenvector pairs:
\[\lambda_1=\frac{1+\sqrt{5}}{2}, \text{ } e_1=\left[\begin{array}{c}\frac{1+\sqrt{5}}{2}\\1\end{array}\right] \]
\[\lambda_2=\frac{1-\sqrt{5}}{2}, \text{ } e_2=\left[\begin{array}{c}\frac{1-\sqrt{5}}{2}\\1\end{array}\right] \]

We can then write $S$ as
\[ S=\left[\begin{array}{cc}\lambda_1&\lambda_2\\1&1\end{array}\right] \]
and $\Lambda$ as
\[ \Lambda=\left[\begin{array}{cc}\lambda_1&0\\0&\lambda_2\end{array}\right] \]

What's left for us now is to calculate $S^{-1}$.
For a $2 \times 2$ matrix:
\[M=\left[\begin{array}{cc}a&b\\c&d\end{array}\right] \Rightarrow
M^{-1}=\frac{1}{ad-bc}\left[\begin{array}{cc}d&-b\\-c&a\end{array}\right] \]
In this case
\[S^{-1}=\frac{1}{\sqrt{5}}\left[\begin{array}{cc}1&-\frac{1-\sqrt{5}}{2}\\-1&\frac{1+\sqrt{5}}{2}\end{array}\right] \]
We can now substitute into the expression $A^n=S\Lambda^n S^{-1}$ to obtain:
\[ A^n = \frac{1}{\lambda_1-\lambda_2} \left[\begin{array}{cc}\lambda_1^{n+1}-\lambda_2^{n+1}&\lambda_1^n-\lambda_2^n\\\lambda_1^n-\lambda_2^n&\lambda_1^{n-1}-\lambda_2^{n-1}\end{array}\right] \]

Hence \[ \left[ \begin{array}{c}f_{n+1}\\f_n\end{array}\right]= \frac{1}{\lambda_1-\lambda_2} \left[\begin{array}{cc}\lambda_1^{n+1}-\lambda_2^{n+1}&\lambda_1^n-\lambda_2^n\\\lambda_1^n-\lambda_2^n&\lambda_1^{n-1}-\lambda_2^{n-1}\end{array}\right]\left[ \begin{array}{c}f_1\\f_0\end{array}\right] \]
We can then see that $f_n$ is given by
\[ f_n=\frac{1}{\lambda_1-\lambda_2}\left(\lambda_1^n-\lambda_2^n\right)=
\frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^n-\left(\frac{1-\sqrt{5}}{2}\right)^n\right] \]
since $f_1=1$ and $f_0=0$.

The above is our closed-form expression for $f_n$.
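As a sanity check, the closed-form expression can be compared against both the matrix-power formulation and a plain iteration. This is a small sketch using NumPy; the function names are my own:

```python
import numpy as np

def fib_iterative(n):
    """Plain iteration: f_0 = 0, f_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_matrix(n):
    """f_n is the top-right entry of A^n, A = [[1,1],[1,0]]."""
    A = np.array([[1, 1], [1, 0]], dtype=np.int64)
    return int(np.linalg.matrix_power(A, n)[0, 1])

def fib_closed_form(n):
    """Binet's formula: f_n = (lam1^n - lam2^n) / sqrt(5)."""
    sqrt5 = 5 ** 0.5
    lam1, lam2 = (1 + sqrt5) / 2, (1 - sqrt5) / 2
    return round((lam1 ** n - lam2 ** n) / sqrt5)

# all three agree for small n (int64 and float precision limit large n)
for n in range(20):
    assert fib_iterative(n) == fib_matrix(n) == fib_closed_form(n)
```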


[1] Given that $A = S \Lambda S^{-1}$, we obtain $A^2 = S \Lambda S^{-1} S \Lambda S^{-1} = S \Lambda^2 S^{-1}$, since $S^{-1}S$ cancels. Continuing this process until we reach $n$, we can see that $A^n = S \Lambda^n S^{-1}$.

Last Updated on Saturday, 16 February 2013 08:17
Data Analytics: Summaries on Strang's Linear Algebra - Eigenvalues & Eigenvectors
Written by FemiByte   
Tuesday, 12 February 2013 04:04

Eigenvalues and Eigenvectors

When the matrix $A$ is squared, the eigenvectors $x_1 \cdots x_n$ stay the same, and the eigenvalues are squared.

Eigenvalues of well-known 2-D matrices

Projection: $\lambda = 0, 1$
Reflection: $\lambda = -1, 1$

The Equation for the Eigenvalues

The number $\lambda$ is an eigenvalue of $A$ if and only if $A-\lambda I$ is singular: \[ \text{det(}A -\lambda I\text{)}=0 \] For each $\lambda$, solve $(A-\lambda I)x=0$ or $Ax=\lambda x$ to find an eigenvector $x$. To solve the eigenvalue problem for an $n \times n$ matrix, follow these steps:

1. Compute the determinant of $A-\lambda I$. It is a polynomial in $\lambda$ of degree $n$.
2. Find the roots of the polynomial by solving $\text{det(}A-\lambda I)=0$. The $n$ roots are the $n$ eigenvalues of $A$. They make $A-\lambda I$ singular.
3. For each eigenvalue $\lambda$, solve $(A-\lambda I)x=0$ to find an eigenvector $x$.
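These three steps can be checked numerically; below is a small sketch using NumPy (the sample matrix is my own choice for illustration):

```python
import numpy as np

# A sample 2 x 2 symmetric matrix with eigenvalues 1 and 3.
A = np.array([[2.0, 1.0], [1.0, 2.0]])

# Steps 1-2: np.linalg.eig finds the roots of det(A - lambda I) = 0.
eigvals, eigvecs = np.linalg.eig(A)

# Step 3: each column of eigvecs solves (A - lambda I) x = 0,
# i.e. A x = lambda x holds for every eigenvalue/eigenvector pair.
for lam, x in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ x, lam * x)

# A - lambda I is singular at each eigenvalue:
for lam in eigvals:
    assert abs(np.linalg.det(A - lam * np.eye(2))) < 1e-10
```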

Adding one row to another, or exchanging rows, changes the eigenvalues: elimination does not preserve them.

The product of the $n$ eigenvalues equals the determinant of $A$: \[ \lambda_1 \cdot \lambda_2 \cdots \lambda_n = \text{det(} A \text{)}\]

The sum of the $n$ eigenvalues equals the sum of the $n$ diagonal entries of $A$. This sum along the main diagonal is called the trace of $A$: \[ \text{trace=} \lambda_1 + \lambda_2 + \cdots + \lambda_n = a_{11} + a_{22} + \cdots + a_{nn} \]
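Both identities are easy to verify numerically, e.g. for a random matrix (a NumPy sketch):

```python
import numpy as np

# Product of eigenvalues = det(A); sum of eigenvalues = trace(A).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
lam = np.linalg.eigvals(A)   # may be complex; the identities still hold

assert np.isclose(np.prod(lam), np.linalg.det(A))
assert np.isclose(np.sum(lam), np.trace(A))
```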

Imaginary Eigenvalues

Imaginary eigenvalues occur when the vector $x$ is rotated by 90 degrees by, say, a rotation matrix $Q$. In this case $Qx$ cannot be in the same direction as $x$. There cannot be a real-valued eigenvector, and we end up with imaginary ones. The complex vectors $x_1=(1,i)$ and $x_2=(i,1)$ keep their direction as they are rotated. These are the properties of $Q$:

1. $Q$ is an orthogonal matrix so the absolute value of each $\lambda$ is $|\lambda|=1$.
2. $Q$ is a skew-symmetric matrix so each $\lambda$ is purely imaginary.
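A concrete check, using the 90-degree rotation matrix (a NumPy sketch):

```python
import numpy as np

# 90-degree rotation: Qx can never line up with x, so the
# eigenvalues are purely imaginary, +i and -i.
Q = np.array([[0.0, -1.0], [1.0, 0.0]])
lam = np.linalg.eigvals(Q)

assert np.allclose(np.sort_complex(lam), [-1j, 1j])
assert np.allclose(np.abs(lam), 1.0)   # |lambda| = 1 since Q is orthogonal
assert np.allclose(lam.real, 0.0)      # purely imaginary since Q is skew-symmetric
```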


Key Ideas

  1. $Ax=\lambda x$ says that $x$ keeps the same direction when multiplied by $A$.
  2. $Ax=\lambda x$ says that $\text{det(}A-\lambda I)=0$. This determines $n$ eigenvalues.
  3. The eigenvalues of $A^2$ and $A^{-1}$ are $\lambda^2$ and $\lambda^{-1}$, with the same eigenvectors.
  4. The sum and product of the $\lambda$'s equal the trace (sum of $a_{ii}$) and determinant.
  5. Special matrices like projections $P$ and rotations $Q$ have special eigenvalues.


Suppose the $n$ by $n$ matrix $A$ has $n$ linearly independent eigenvectors $x_1, \cdots, x_n$. Put them into the columns of an eigenvector matrix $S$. Then $S^{-1}AS$ is the eigenvalue matrix $\Lambda$: \[ S^{-1}AS = \Lambda = \left[ \begin{array}{ccc}\lambda_1&&\\&\ddots&\\&&\lambda_n \end{array}\right] \] The matrix $A$ is diagonalized.

We can write $AS=S\Lambda$ in 2 ways: \[ S^{-1}AS=\Lambda \text{ or } A=S \Lambda S^{-1} \] The matrix $S$ has an inverse, because its columns (eigenvectors of A) were assumed to be linearly independent. Without $n$ independent eigenvectors, we can't diagonalize.
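A quick numerical illustration of both factorizations (the sample matrix, with distinct eigenvalues 4 and 5, is my own):

```python
import numpy as np

# Diagonalize A = S Lambda S^{-1} and verify both factorizations.
A = np.array([[6.0, -1.0], [2.0, 3.0]])   # eigenvalues 5 and 4
lam, S = np.linalg.eig(A)                 # columns of S are eigenvectors
Lam = np.diag(lam)

assert np.allclose(np.linalg.inv(S) @ A @ S, Lam)   # S^{-1} A S = Lambda
assert np.allclose(S @ Lam @ np.linalg.inv(S), A)   # A = S Lambda S^{-1}
```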

Remarks about diagonalization

  1. Any matrix that has no repeated eigenvalues can be diagonalized
  2. Eigenvectors can be multiplied by any nonzero constants
  3. Suppose the 1st column of $S$ is $x_1$. Then the 1st column of $AS$ and $S\Lambda$ are $Ax_1$ and $\lambda_1x_1$. For those to be equal, $x_1$ must be an eigenvector.
  4. Matrices that have too few eigenvectors are not diagonalizable

Matrix Powers


  1. Write $u_0$ as a combination $c_1x_1 + \cdots + c_nx_n$ of the eigenvectors. Then $c=S^{-1}u_0$.
  2. Multiply each eigenvector $x_i$ by $(\lambda_i)^k$.
  3. Add up the pieces $c_i(\lambda_i)^kx_i$ to find the solution $u_k=A^ku_0$. This is $S\Lambda^kS^{-1}u_0$.
  Solution for $u_{k+1}=Au_k$ : $u_k=A^ku_0=c_1(\lambda_1)^kx_1 + \cdots + c_n(\lambda_n)^kx_n$
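The three steps can be carried out directly; here is a sketch in NumPy (the matrix and starting vector are made up for illustration):

```python
import numpy as np

# Solve u_{k+1} = A u_k via u_k = S Lambda^k S^{-1} u_0.
A = np.array([[0.9, 0.3], [0.1, 0.7]])
u0 = np.array([1.0, 0.0])
k = 10

lam, S = np.linalg.eig(A)
c = np.linalg.solve(S, u0)    # step 1: u_0 = S c, so c = S^{-1} u_0
uk = S @ (lam ** k * c)       # steps 2-3: sum of c_i (lambda_i)^k x_i

# Agrees with brute-force repeated multiplication:
assert np.allclose(uk, np.linalg.matrix_power(A, k) @ u0)
```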

Nondiagonalizable Matrices

For exceptional matrices, an eigenvalue can be repeated. Then there are 2 different ways to count its multiplicity. Always $GM \leq AM$ for each $\lambda$:

  1. Geometric Multiplicity=GM Count the independent eigenvectors for $\lambda$. This is the dimension of the nullspace of $A-\lambda I$.
  2. Algebraic Multiplicity=AM. Count the repetitions of $\lambda$ among the eigenvalues. Look at the $n$ roots of $\text{det(}A-\lambda I \text{)}=0$.
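The standard example is a Jordan block, where AM = 2 but GM = 1 (a NumPy sketch):

```python
import numpy as np

# lambda = 1 is a double root (AM = 2) but the nullspace
# of A - I is only one-dimensional (GM = 1).
A = np.array([[1.0, 1.0], [0.0, 1.0]])

lam = np.linalg.eigvals(A)
AM = int(np.sum(np.isclose(lam, 1.0)))           # algebraic multiplicity
GM = 2 - np.linalg.matrix_rank(A - np.eye(2))    # dim of nullspace of A - I

assert AM == 2 and GM == 1   # GM < AM: A is not diagonalizable
```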

Key Ideas

  1. If $A$ has $n$ independent eigenvectors $x_1, \cdots, x_n$, they go into the columns of $S$. $S^{-1}AS$ is diagonal. $S^{-1}AS=\Lambda \text{ or } A=S\Lambda S^{-1}$
  2. The powers of $A$ are $A^k = S\Lambda^kS^{-1}$. The eigenvectors in $S$ are unchanged.
  3. The eigenvalues of $A^k$ are $(\lambda_1)^k \cdots (\lambda_n)^k$ in the matrix $\Lambda^k$.
  4. The solution to $u_{k+1}=Au_k$ starting from $u_0$ is $u_k=A^ku_0=S\Lambda^kS^{-1}u_0$: \[u_k=A^ku_0=c_1(\lambda_1)^kx_1 + \cdots + c_n(\lambda_n)^kx_n \] provided $u_0=c_1x_1 + \cdots + c_nx_n$
  5. $A$ is diagonalizable if every eigenvalue has enough eigenvectors (GM=AM).

Last Updated on Tuesday, 12 March 2013 23:17
Data Analytics: Summaries on Strang's Linear Algebra - Determinants
Written by FemiByte   
Friday, 11 January 2013 23:43


5.1 Properties of Determinants

1. The determinant of the $n \times n$ identity matrix is 1.

\[ \left\vert \begin{array}{cc}1&0\\0&1 \end{array}\right\vert = 1\]


\[ \left\vert \begin{array}{cccc}1&0&\cdots&0\\0&1&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&1 \end{array}\right\vert = 1\]

2. The determinant changes sign when 2 rows are exchanged (sign reversal):


\[ \left\vert \begin{array}{cc}c&d\\a&b \end{array}\right\vert = - \left\vert \begin{array}{cc}a&b\\c&d \end{array}\right\vert\]

3. The determinant is a linear function of each row separately (all other rows stay fixed).

Multiply row 1 by t:

\[ \left\vert \begin{array}{cc}ta&tb\\c&d \end{array}\right\vert =  t\left\vert\begin{array}{cc}a&b\\c&d \end{array}\right\vert \]


Add row 1 of A to row 1 of A':

\[ \left\vert \begin{array}{cc}a+a'&b+b'\\c&d \end{array}\right\vert =  \left\vert\begin{array}{cc}a&b\\c&d \end{array}\right\vert + \left\vert\begin{array}{cc}a'&b'\\c&d \end{array}\right\vert\]


4. If 2 rows of A are equal, then det A = 0.


\[ \left\vert \begin{array}{cc}a&b\\a&b \end{array}\right\vert = 0 \]


5. Subtracting a multiple of one row from another row leaves det A unchanged.

This follows from rules 3 and 4: linearity splits the determinant, and the equal-rows term vanishes.

\[ \left\vert \begin{array}{cc}a&b\\c-la&d-lb \end{array}\right\vert =  \left\vert\begin{array}{cc}a&b\\c&d \end{array}\right\vert \]


6. A matrix with a row of zeros has det A=0.

\[ \left\vert \begin{array}{cc}a&b\\0&0 \end{array}\right\vert = 0 \]



7. If A is triangular, then det A = $a_{11}a_{22}\cdots a_{nn}$ = product of diagonal entries.

\[ \left\vert \begin{array}{cc}a&b\\0&d \end{array}\right\vert = \left\vert \begin{array}{cc}a&0\\c&d \end{array}\right\vert = ad \]


8. If A is singular then det A$=0$. If A is invertible then det A $\neq 0$

$\left\vert \begin{array}{cc}a&b\\c&d \end{array}\right\vert $ is singular iff $ad-bc=0$.


9. The determinant of $AB$ equals $(\text{det }A)(\text{det }B)$: $|AB|=|A||B|$

\[ \left\vert \begin{array}{cc}a&b\\c&d \end{array}\right\vert \left\vert \begin{array}{cc}p&q\\r&s \end{array}\right\vert = \left\vert \begin{array}{cc}ap+br&aq+bs\\cp+dr&cq+ds \end{array}\right\vert \]



10. The transpose $A^T$ has the same determinant as $A$.

\[ \left\vert \begin{array}{cc}a&b\\c&d \end{array}\right\vert = \left\vert \begin{array}{cc}a&c\\b&d \end{array}\right\vert \]
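Properties 9 and 10 can be spot-checked numerically (a NumPy sketch with random matrices):

```python
import numpy as np

# Check |AB| = |A||B| and det(A^T) = det(A) on random matrices.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))
```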


Key Ideas

i. The determinant is defined by $\text{det } I=1$, sign reversal and linearity in each row.

ii. After elimination $\text{det A}$ is $\pm$ (product of the pivots).

iii. The determinant is zero exactly when $A$ is not invertible.

iv. Two remarkable properties are $\text{det }AB = \text{(det }A\text{)(det B})$ and $\text{det A}^T = \text{det A}$.



5.2 Permutations and Cofactors

The determinant can be found in 3 ways:

I. Pivots
II. Big formula
III. Cofactors


I. Pivots method


Using elimination and row exchanges, convert $A$ to $LU$.
The permutation matrix $P$ from $PA=LU$ has determinant $\pm 1$.
The determinant of $A$ is then $\pm 1 \times \text{(product of the pivots)}$:

$\text{(det P)(det A)=(det L)(det U)}$ gives $\text{det A} = \pm(d_1d_2\cdots d_n)$
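The pivot method can be sketched as code. This is a minimal illustration (the function name and the sample matrix are mine), tracking the sign flip on each row exchange:

```python
import numpy as np

def det_by_pivots(A):
    """det A = +/- (product of pivots) after elimination with row exchanges."""
    U = np.array(A, dtype=float)
    n = len(U)
    sign = 1.0
    for k in range(n):
        p = k + np.argmax(np.abs(U[k:, k]))   # partial pivoting
        if p != k:
            U[[k, p]] = U[[p, k]]             # row exchange flips the sign
            sign = -sign
        if U[k, k] == 0.0:
            return 0.0                        # a zero pivot: A is singular
        U[k + 1:] -= np.outer(U[k + 1:, k] / U[k, k], U[k])
    return sign * np.prod(np.diag(U))

A = [[2.0, 1.0, 1.0], [4.0, -6.0, 0.0], [-2.0, 7.0, 2.0]]
assert np.isclose(det_by_pivots(A), np.linalg.det(A))
```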


II. Big Formula method

The formula has $n!$ terms.

In the big formula method, each product takes one entry from each row and one entry from each column; these choices correspond to a permutation matrix. The determinant is the sum of these $n!$ products, each multiplied by $\pm1$ depending upon whether the permutation $P$ is even or odd. Thus we have $\text{det }A$ = sum over all $n!$ column permutations $P=( \alpha,\beta \cdots \omega) $ =  \[\sum(\text{det }P)a_{1\alpha}a_{2\beta} \cdots a_{n\omega}  = \sum\pm a_{1\alpha}a_{2\beta} \cdots a_{n\omega} \]
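The big formula translates almost verbatim into code; a sketch (function name mine) that sums over all $n!$ permutations with the parity sign:

```python
import numpy as np
from itertools import permutations

def det_big_formula(A):
    """Sum over all n! column permutations, with the parity sign."""
    A = np.asarray(A)
    n = len(A)
    total = 0.0
    for perm in permutations(range(n)):
        # sign = (-1)^(number of inversions): even permutations +1, odd -1
        inversions = sum(perm[i] > perm[j]
                         for i in range(n) for j in range(i + 1, n))
        sign = -1.0 if inversions % 2 else 1.0
        total += sign * np.prod([A[i, perm[i]] for i in range(n)])
    return total

A = [[2.0, 1.0, 3.0], [0.0, 4.0, 1.0], [5.0, 2.0, 6.0]]
assert np.isclose(det_big_formula(A), np.linalg.det(A))
```

Note the $n!$ cost: this is only practical for tiny matrices, which is why pivots are used in practice.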


III. Cofactor formula

The determinant is the dot product of any row $i$ of $A$ with its cofactors using other rows: \[\text{det }A = a_{i1}C_{i1}+a_{i2}C_{i2} + \cdots + a_{in}C_{in} \]

Each cofactor $C_{ij}$ (order $n-1$, without row $i$ and column $j$) includes its correct sign: $C_{ij} = (-1)^{i+j} \text{det }M_{ij}$
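A sketch of cofactor expansion along row 0 (the recursive helper is my own illustration, not from the text):

```python
import numpy as np

def det_by_cofactors(A):
    """Expand along row 0: det A = sum_j a_0j * C_0j."""
    A = np.asarray(A, dtype=float)
    n = len(A)
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # minor M_0j: delete row 0 and column j
        minor = np.delete(A[1:], j, axis=1)
        C = (-1) ** j * det_by_cofactors(minor)   # cofactor with its sign
        total += A[0, j] * C
    return total

A = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 10.0]]
assert np.isclose(det_by_cofactors(A), np.linalg.det(A))
```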



5.3 Cramer's Rule, Inverses and Volumes

I. Cramer's Rule

If $\text{det A}$ is not zero, $Ax=b$ has the unique solution

\[ x_1 = \frac{\text{det }B_1}{\text{det }A} \hspace{10pt} x_2 = \frac{\text{det }B_2}{\text{det }A} \hspace{10pt} \cdots \hspace{10pt} x_n = \frac{\text{det }B_n}{\text{det }A} \]

The matrix $B_j$ has the $jth$ column of $A$ replaced by the vector $b$.
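Cramer's rule translates directly into code; a sketch (function name mine), checked against a library solver:

```python
import numpy as np

def cramer_solve(A, b):
    """x_j = det(B_j) / det(A), with column j of A replaced by b."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    detA = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        Bj = A.copy()
        Bj[:, j] = b          # replace the j-th column with b
        x[j] = np.linalg.det(Bj) / detA
    return x

A = [[3.0, 1.0], [1.0, 2.0]]
b = [9.0, 8.0]
assert np.allclose(cramer_solve(A, b), np.linalg.solve(A, b))
```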


II. Formula for $A^{-1}$

The $i,j$ entry of $A^{-1}$ is the cofactor $C_{ji}$ (not $C_{ij}$) divided by $\text{det }A$:

\[  (A^{-1})_{ij} =\frac{C_{ji}}{\text{det }A} \hspace{10pt} \text{and} \hspace{10pt} A^{-1} = \frac{C^T}{\text{det }A} \]
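A sketch of the cofactor formula for the inverse (note the transpose: the $i,j$ entry uses $C_{ji}$, not $C_{ij}$); the function name is mine:

```python
import numpy as np

def inverse_by_cofactors(A):
    """A^{-1} = C^T / det(A), where C is the matrix of cofactors."""
    A = np.asarray(A, dtype=float)
    n = len(A)
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / np.linalg.det(A)   # the transpose gives C_ji / det A

A = [[2.0, 1.0], [5.0, 3.0]]
assert np.allclose(inverse_by_cofactors(A), np.linalg.inv(A))
```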

III. Area of triangle

The triangle with corners $(x_1,y_1)$ and $(x_2,y_2)$ and $(x_3,y_3)$ has area=$\frac{1}{2}$(determinant):

\[ \text{Area } = \frac{1}{2} \left\vert \begin{array}{ccc}x_1&y_1&1\\x_2&y_2&1\\x_3&y_3&1 \end{array}\right\vert \]

when $(x_3,y_3)=(0,0)$

\[ \text{Area } = \frac{1}{2} \left\vert \begin{array}{cc}x_1&y_1\\x_2&y_2 \end{array}\right\vert \]

IV. Cross Product

The cross product of $u$ = $(u_1,u_2,u_3)$ and $v$=$(v_1,v_2,v_3)$ is the vector

\[ u \text { x } v = \left\vert \begin{array}{ccc}i&j&k\\u_1&u_2&u_3\\v_1&v_2&v_3 \end{array}\right\vert = (u_2v_3-u_3v_2)i + (u_3v_1-u_1v_3)j + (u_1v_2-u_2v_1)k \]

This vector $u \text{ x } v$ is perpendicular to $u$ and $v$. The cross product $v \text{ x } u$ is $-(u \text{ x } v)$.

The length of $u \text{ x } v$ equals the area of the parallelogram with sides $u$ and $v$.

The cross product is a vector with length $\|u\|\,\|v\|\,|\text{sin }\theta|$. Its direction is perpendicular to $u$ and $v$.

V. Triple Product

The triple product is defined as $(u \text{ x } v) \cdot w$ = \[ \left\vert \begin{array}{ccc}w_1&w_2&w_3\\u_1&u_2&u_3\\v_1&v_2&v_3 \end{array}\right\vert \]

$(u \text{ x } v) \cdot w$ equals the volume of the box with sides $u$, $v$ and $w$.

$(u \text{ x } v) \cdot w$ = $0$ exactly when the vectors $u$, $v$, $w$ lie in the same plane.
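The stated properties of the cross and triple products can all be verified numerically (a NumPy sketch; the sample vectors are mine):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
w = np.array([7.0, 8.0, 10.0])

uxv = np.cross(u, v)

# u x v is perpendicular to both u and v, and v x u = -(u x v)
assert np.isclose(uxv @ u, 0.0) and np.isclose(uxv @ v, 0.0)
assert np.allclose(np.cross(v, u), -uxv)

# |u x v| = |u| |v| |sin(theta)|
cos_t = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
sin_t = np.sqrt(1 - cos_t ** 2)
assert np.isclose(np.linalg.norm(uxv),
                  np.linalg.norm(u) * np.linalg.norm(v) * sin_t)

# Triple product (u x v) . w = determinant with rows w, u, v
assert np.isclose(uxv @ w, np.linalg.det(np.array([w, u, v])))
```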

Last Updated on Monday, 28 January 2013 06:57
Math Finance Series : Mortgage Scheduled Factor and Prepayments
Written by FemiByte   
Thursday, 25 October 2012 12:05
The formula for computing the remaining principal balance at any point in time during the amortization
period is as follows:
\[ MB_t = MB_0 \left[\frac{[(1+r_m)^n-(1+r_m)^t]}{[(1+r_m)^n-1]} \right] \]
$MB_0$= Original mortgage balance
$MB_t$ = Mortgage balance after $t$ months
$r_m$ = monthly interest rate ($r/12$)
$n$ = no. of months (360 for 30yr, 180 for 15yr etc)
The expression
\[\left[\frac{(1+r_m)^n-(1+r_m)^t}{(1+r_m)^n-1} \right] \]
which can be rewritten as
\[\left[\frac{1-(1+r_m)^{t-n}}{1-(1+r_m)^{-n}} \right] \]
is known as the scheduled amortization factor (SAF) for that month.
Hence, for a mortgage rate of 8% p.a. (so that $r_m = 0.08/12 = 0.00667$) and a 30-year term ($n=360$), the SAF after $t=59$ months is
\[SAF_{59} = \left[\frac{1 - (1+ 0.00667)^{59-360}}{1-(1+0.00667)^{-360}} \right] \]
\[ = \frac{0.8646657}{0.9085566} = 0.951692\]
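The SAF calculation above can be reproduced in a few lines (a sketch; the function name is mine):

```python
def scheduled_amortization_factor(t, annual_rate, n_months):
    """SAF_t = (1 - (1+r)^(t-n)) / (1 - (1+r)^(-n)), r = monthly rate."""
    r = annual_rate / 12.0
    return (1 - (1 + r) ** (t - n_months)) / (1 - (1 + r) ** (-n_months))

# A 30-year (360-month) mortgage at 8% p.a., after 59 months:
saf_59 = scheduled_amortization_factor(59, 0.08, 360)
assert abs(saf_59 - 0.951692) < 1e-4
```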
Prepayment is the paydown of principal of a mortgage pass-through in a given month that exceeds the amount of its scheduled amortization for that month. The rate of prepayment is therefore the excessive paydown in a given month as a percent of the outstanding principal at the beginning of the month. This excess paydown is always measured on a monthly basis. Like interest rates, however, it is often expressed as an annualized rate. The prepayment rate of a mortgage pool is low right after its formation, but it accelerates as the pool ages.
In the initial years, therefore, the prepayment rate is often measured in conjunction with the aging of the pool.
Single Monthly Mortality Rate (SMM)
Single Monthly Mortality (SMM) rate measures the percentage of a pool's outstanding balance at the beginning of
the month that was prepaid during the month.
The SMM of month $N$, $SMM_N$ is calculated as follows:
\[ SMM_N = \frac{SF_N -AF_N}{SF_N} \]
\[ SF_N = AF_{N-1}*\frac{SAF_N}{SAF_{N-1}} \]
$SF_N$= scheduled factor at the end of month $N$
$AF_N$= actual factor at the end of month $N$
$AF_{N-1}$= actual factor at the end of month $N-1$
$SAF_N$= scheduled amortization factor at the end of month $N$
$SAF_{N-1}$= scheduled amortization factor at the end of month $N-1$
It is important to understand and differentiate the concept and the calculation
of the 'scheduled factor' and the 'scheduled amortization factor' for the calculation
of SMM.
Conditional Prepayment Rate
The SMM can be converted and annualized in terms of a conditional prepayment rate (CPR).
The conversion is based on the formula:
\[ CPR = 1 - (1-SMM)^{12} \]
Much of the material for this article is taken from "Basics of Mortgage-Backed Securities" by Joseph Hu.
Last Updated on Friday, 26 October 2012 17:35
Math Finance Series : Mortgage Prepayment Rates
Written by FemiByte   
Wednesday, 27 June 2012 04:47

Mortgage Prepayments

A prepayment occurs when principal is returned early to the mortgage investor. Prepayments may occur in one of these scenarios:

  1. Sale of a home
  2. Borrowers prepay their mortgage
  3. Refinancing due to lower interest rate
  4. Mortgage default and foreclosure

Knowledge of prepayments is necessary to calculate cash flows and calculate the value of a mortgage backed security.


One means of measuring prepayments is to calculate what proportion of the scheduled remaining balance is prepaid each month. This measure is known as the Single Monthly Mortality rate, or SMM. An SMM of $s$ means that $s\%$ of the scheduled remaining balance at the end of the month will prepay. The SMM assumes a constant percentage prepayment of principal each month.

The SMM is calculated as follows: \[ SMM = \frac{SB_t-AB_t}{SB_t} \] where
$SB_t$ = Scheduled principal balance at month $t$
$AB_t$ = Actual principal balance at month $t$
The average SMM over a one-year period is the value that satisfies

\[ \text{Actual year-end balance} = (\text{Scheduled year-end balance}) \times (1-SMM)^{12} \]

The Conditional Prepayment Rate, or CPR, is the annualized version of an SMM. It is computed as follows: \[ CPR = 1 -(1-SMM)^{12} \] The formula above applies when the SMM is measured over a period of one year. When it is measured over a period of $k$ months, the annualized CPR is \[ CPR = 1 -(1-SMM)^{12/k} \]

If we substitute the formula for SMM into the formula above, we obtain \[ CPR = 1 -\left[\frac{AB_t}{SB_t}\right]^{12/k} \]

1. Much of the material for this article is taken from Fabozzi's "Fixed Income Mathematics 3rd Ed", published by McGraw-Hill

Last Updated on Tuesday, 12 March 2013 20:42
