Following the principle “You only understand something thoroughly if you can explain it”, here are my prep notes for Machine Intelligence II. If no sources are indicated, the content comes from the lecture slides.

Note: This was first and foremost written for my own understanding, so it might contain incomplete explanations.

## General Terms and tools

A lot of the different methods rely on some general machinery that is reused throughout. Need a refresher on matrix multiplication? Oh, and the dot product is the same as a scalar product.

### Centered Data

Centering data means shifting its center of mass to 0: for each dimension, the average is computed and then subtracted from all data points.

The mean is also called the first moment.

or with numpy:
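A minimal sketch, assuming the data is stored as a NumPy array `X` with one row per data point:

```python
import numpy as np

# Toy data: 4 points in 2 dimensions (one row per data point)
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0],
              [7.0, 8.0]])

# Subtract the per-dimension mean (the first moment) from every point
X_centered = X - X.mean(axis=0)

# The centered data now has a center of mass of zero in each dimension
print(X_centered.mean(axis=0))  # → [0. 0.]
```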

### Covariance matrix

Assuming $p$ centered data points $x^{(\alpha)} \in \mathbb{R}^N$, the covariance matrix is

$$C = \frac{1}{p} \sum_{\alpha=1}^{p} x^{(\alpha)} \left(x^{(\alpha)}\right)^T, \qquad C_{ij} = \frac{1}{p} \sum_{\alpha=1}^{p} x_i^{(\alpha)} x_j^{(\alpha)}$$
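The covariance matrix $C = \frac{1}{p}\sum_{\alpha} x^{(\alpha)} \left(x^{(\alpha)}\right)^T$ of centered, row-wise data collapses to a single matrix product — a sketch in NumPy (the random data is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))  # p = 100 data points in 3 dimensions
X = X - X.mean(axis=0)         # center the data first

p = X.shape[0]
C = (X.T @ X) / p              # (1/p) * sum of outer products x x^T

# Matches numpy's built-in (bias=True uses the 1/p normalization)
print(np.allclose(C, np.cov(X, rowvar=False, bias=True)))  # → True
```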

### Whitened Data

Whitening transforms the data so that its covariance matrix becomes the identity matrix. The data is then uncorrelated (but might still be dependent). This is useful, e.g., for finding outliers.
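One common way to whiten is via the eigendecomposition of the covariance matrix: rotate the data onto the eigenbasis and rescale each direction by $1/\sqrt{\lambda_i}$. A sketch (the correlated toy data is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)
# Correlated 2-D data: mix independent Gaussians with a linear map
X = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.0], [1.0, 0.5]])
X = X - X.mean(axis=0)  # center first

# Eigendecomposition of the covariance matrix
C = (X.T @ X) / X.shape[0]
eigvals, eigvecs = np.linalg.eigh(C)

# Rotate onto the eigenbasis, rescale each direction by 1/sqrt(eigenvalue)
X_white = (X @ eigvecs) / np.sqrt(eigvals)

# The whitened data has an identity covariance matrix
C_white = (X_white.T @ X_white) / X_white.shape[0]
print(np.round(C_white, 6))
```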

### Kullback-Leibler-Divergence

The Kullback-Leibler divergence measures the dissimilarity between two probability distributions, here $P$ and $\hat P$:

$$D_{KL}(P \,\|\, \hat P) = \sum_x P(x) \log \frac{P(x)}{\hat P(x)}$$

It is not a true distance, since it is not symmetric: in general $D_{KL}(P \,\|\, \hat P) \neq D_{KL}(\hat P \,\|\, P)$.
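A sketch of the discrete KL divergence $\sum_x P(x) \log \left( P(x) / \hat P(x) \right)$ in NumPy (the example distributions are illustrative):

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(p || q) for discrete distributions given as probability vectors."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0  # terms with p(x) = 0 contribute 0 to the sum
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

print(kl_divergence(p, p))  # → 0.0 (identical distributions)
print(kl_divergence(p, q))  # small positive number
# Note the asymmetry: kl_divergence(p, q) != kl_divergence(q, p) in general
```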

### Jacobian Matrix

For a function $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$, the Jacobian matrix is the $m \times n$ matrix of all first-order partial derivatives:

$$J_{ij} = \frac{\partial f_i}{\partial x_j}$$
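As a sketch, the Jacobian can be approximated numerically with central differences (the example function and step size are illustrative choices):

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Approximate the m x n Jacobian of f at x with central differences."""
    x = np.asarray(x, float)
    n = x.size
    m = np.atleast_1d(f(x)).size
    J = np.zeros((m, n))
    for j in range(n):
        step = np.zeros(n)
        step[j] = eps
        J[:, j] = (f(x + step) - f(x - step)) / (2 * eps)  # df_i / dx_j
    return J

# Example: f(x, y) = (x^2 * y, 5x + sin(y)), a map from R^2 to R^2
f = lambda v: np.array([v[0] ** 2 * v[1], 5 * v[0] + np.sin(v[1])])

# Analytically J = [[2xy, x^2], [5, cos(y)]], so at (1, 0): [[0, 1], [5, 1]]
J = numerical_jacobian(f, [1.0, 0.0])
print(np.round(J, 4))
```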

### Mercer's theorem

From the slides:

Every positive semidefinite kernel $k$ corresponds to a scalar product in some metric feature space
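To illustrate what positive semidefinite means in practice (using the Gaussian/RBF kernel as an example kernel): the Gram matrix $K_{ij} = k(x^{(i)}, x^{(j)})$ of such a kernel has no negative eigenvalues, which is what lets $k(x, x')$ behave like a scalar product $\phi(x) \cdot \phi(x')$ in some feature space.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 3))  # 20 points in R^3

# Gaussian (RBF) kernel: k(x, x') = exp(-||x - x'||^2 / (2 sigma^2))
sigma = 1.0
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-sq_dists / (2 * sigma ** 2))  # Gram matrix

# Positive semidefinite: all eigenvalues are >= 0 (up to roundoff)
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() >= -1e-10)  # → True
```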

### Markov Process

A Markov process depends only on the most recent state, i.e. the probabilities of transitioning into the next state are independent of any older states.
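A minimal sketch of a discrete Markov chain (the transition matrix is an illustrative choice): sampling the next state uses only the current state, never the earlier trajectory.

```python
import numpy as np

# Transition matrix: P[i, j] = probability of moving from state i to state j
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
assert np.allclose(P.sum(axis=1), 1.0)  # each row is a probability distribution

rng = np.random.default_rng(3)
state = 0
trajectory = [state]
for _ in range(1000):
    # Markov property: only the current state enters here
    state = rng.choice(3, p=P[state])
    trajectory.append(state)

print(trajectory[:10])
```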

Discrete