*This material is also discussed in detail, with Python code, in chapter 3 of my book “Intuitive Machine Learning and Explainable AI”, available here.*

This is not a traditional tutorial on linear algebra. The material presented here, in a compact style, is rarely taught in college classes. It covers a wide range of topics while avoiding excessive jargon or advanced math. The fundamental tool is the power of a matrix and its byproduct, the characteristic polynomial. It can solve countless problems, as discussed later in this article, with illustrations. In the end, it has more to do with calculus than with matrix algebra.
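To give a flavor of why the power of a matrix and the characteristic polynomial go hand in hand, here is a minimal sketch (my own simplified illustration, restricted to the 2×2 case, not the general algorithm from the article): by the Cayley-Hamilton theorem, a 2×2 matrix A satisfies its characteristic polynomial, so A² = tr(A)·A − det(A)·I, and every power Aⁿ obeys the same linear recurrence.

```python
import numpy as np

def matrix_power_2x2(A, n):
    """Compute A^n using the recurrence A^k = tr(A)*A^(k-1) - det(A)*A^(k-2),
    which follows from the characteristic polynomial of a 2x2 matrix."""
    t, d = np.trace(A), np.linalg.det(A)
    P_prev, P = np.eye(2), A.copy()          # A^0 and A^1
    if n == 0:
        return P_prev
    for _ in range(n - 1):
        P_prev, P = P, t * P - d * P_prev    # apply the recurrence
    return P

A = np.array([[2.0, 1.0], [1.0, 1.0]])
print(np.allclose(matrix_power_2x2(A, 6), np.linalg.matrix_power(A, 6)))  # True
```

The same idea generalizes: for an n×n matrix, the characteristic polynomial yields a linear recurrence of order n for the powers of A, so high powers require only matrix additions and one multiplication by A per step.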

I used “spectacular” in my title for the following reasons:

- An application to time series (auto-regressive models) features an extremely smooth yet highly chaotic, Brownian-related process. I could not find any other examples in the literature after extensive Google and arXiv searches; this may be the first time a picture of this type of strange process has been produced. You will not learn about it in college classes or textbooks. I explain why it is so smooth: it is, implicitly, an integrated Brownian motion, even though no integration is involved. My article will help you understand these advanced stochastic models used by Wall Street, without giving you a headache.
- I play with a powerful technique that can solve a large number of problems, but is rarely used, and certainly never in high dimensions, because it is extremely unstable. I found a way to make it numerically stable, and I explain in detail my related algorithm to solve the problem.
- I explain why the Weibull and Fréchet distributions (used in extreme value theory) are one and the same. Statisticians have been using them for decades without realizing that they are identical.
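As a quick taste of the first bullet point, the following sketch (my own minimal example, with hypothetical parameter choices, not the exact process from the article) simulates an AR(2) process whose characteristic polynomial has a double unit root: X_t = 2·X_{t−1} − X_{t−2} + ε_t. Its increments form a random walk, so the path itself is a twice-summed white noise, a discrete analogue of integrated Brownian motion, which is why it looks so smooth despite being random.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
eps = rng.normal(size=n)       # white noise innovations

# AR(2) with double unit root: characteristic polynomial (z - 1)^2.
x = np.zeros(n)
for t in range(2, n):
    x[t] = 2 * x[t - 1] - x[t - 2] + eps[t]

# The first differences of x form a random walk (sum of the innovations),
# so x is locally smooth yet globally chaotic and non-stationary.
```

Plotting `x` (e.g., with matplotlib) shows a curve that appears almost differentiable at small scales, even though it is driven entirely by noise.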

Despite the wide spectrum of topics covered, the article is unusually short: only eight pages.

## Abstract

This simple introduction to matrix theory offers a refreshing perspective on the subject. Using a basic concept that leads to a simple formula for the power of a matrix, I show how it can solve problems involving time series, Markov chains, linear regression, linear recurrence equations, the pseudo-inverse and square root of a matrix, data compression, principal component analysis (PCA) and dimension reduction, and other machine learning problems. These problems are usually solved with more advanced matrix algebra, including eigenvalues, diagonalization, generalized inverse matrices, and other types of matrix normalization.

My approach is more intuitive, and thus appealing to professionals who do not have a strong mathematical background or who have forgotten what they learned in math textbooks. It will also appeal to physicists and engineers, and to professionals more familiar with, or more interested in, calculus than matrix algebra. Finally, it leads to simple algorithms, for instance for matrix inversion. The classical statistician or data scientist will find my approach somewhat intriguing. The core of the methodology is the characteristic polynomial of a matrix, and in particular its roots with smallest or largest moduli. It leads to a numerically stable method to solve Vandermonde systems, and thus many linear algebra problems. Simulations include a curious fractal time series that looks incredibly smooth.
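To illustrate the matrix-inversion remark, here is a minimal sketch (my own example, not necessarily the algorithm used in the article) based on the characteristic polynomial. By the Cayley-Hamilton theorem, Aⁿ + c_{n−1}A^{n−1} + … + c₁A + c₀I = 0, so whenever c₀ ≠ 0 (i.e., A is invertible), A⁻¹ = −(A^{n−1} + c_{n−1}A^{n−2} + … + c₁I)/c₀.

```python
import numpy as np

def inverse_via_char_poly(A):
    """Invert A using its characteristic polynomial (Cayley-Hamilton)."""
    n = A.shape[0]
    c = np.poly(A)   # coefficients [1, c_{n-1}, ..., c_0], highest degree first
    # Horner-style accumulation of A^{n-1} + c_{n-1} A^{n-2} + ... + c_1 I
    B = np.eye(n)
    for k in range(1, n):
        B = A @ B + c[k] * np.eye(n)
    return -B / c[n]   # c[n] is the constant term c_0

A = np.array([[4.0, 2.0], [1.0, 3.0]])
print(np.allclose(inverse_via_char_poly(A) @ A, np.eye(2)))  # True
```

Note that this approach, while conceptually simple, is not how production libraries invert matrices; it serves here only to show how far the characteristic polynomial alone can take you.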

## Table of Contents

Power of a matrix

Examples, Generalization, and Matrix Inversion

- Example with a non-invertible matrix
- Fast computations
- Square root of a matrix

Application to Machine Learning problems

- Markov chains
- Time series: auto-regressive processes
- Linear regression

Mathematics of auto-regressive time series

- Simulations: curious fractal time series
- White noise: Fréchet, Weibull and exponential cases
- Illustration
- Solving Vandermonde systems: a numerically stable method

Math for Machine Learning: Must-Read Books

## Download the Article

The technical article, entitled *Gentle Introduction to Linear Algebra, with Spectacular Applications*, is accessible in the “Free Books and Articles” section, here. In this PDF document, text highlighted in orange marks keywords that will be incorporated into the index when I aggregate all my related articles into a single book about innovative machine learning techniques. Text highlighted in blue corresponds to external clickable links, mostly references. Red is used for internal links, pointing to a section, bibliography entry, equation, and so on.

*To not miss future articles, sign up for our newsletter, here.*
