# Kalman Filter

## Main.KalmanFilter History


Changed line 6 from:

X_{t+1} = A X_t + N_~~1~~,

to:

X_{t+1} = A X_t + N_t,

Changed line 8 from:

Y_t = B X_t + ~~N~~_~~2~~,

to:

Y_t = B X_t + M_t,

Changed lines 10-12 from:

where $A$ is a matrix called the ''dynamics matrix'' ~~and ~~$B$ is a matrix called the ''observation matrix''~~. ~~ $N_~~1~~$ and $~~N~~_~~2~~$ are independent samples from multi-dimensional normal distributions. These are called the ''dynamics noise'' and the ''measurement noise'', respectively.

In the standard usage, the matrices $A$ and $B$ and the parameters of $N~~_1~~$ and $~~N_2~~$ are all assumed to be known. The algorithm receives as input the observation sequence $y_1,y_2,\ldots, y_t$ and its goal is to estimate $x_t$. This is done using the online Bayes estimator (which is equivalent to the [[strong aggregating algorithm]] for the log loss function). In this case the computation of the prediction can be done in closed form using matrix operations.

In the standard usage, the matrices $A$ and $B$ and the parameters of $N

to:

where $A$ is a matrix called the ''dynamics matrix'', $B$ is a matrix called the ''observation matrix'', and $N_t$ and $M_t$ are independent samples from multi-dimensional normal distributions. These are called the ''dynamics noise'' and the ''measurement noise'', respectively.

In the standard usage, the matrices $A$ and $B$ and the parameters of $N$ and $M$ are all assumed to be known. The algorithm receives as input the observation sequence $y_1,y_2,\ldots, y_t$ and its goal is to estimate $x_t$. This is done using the online Bayes estimator (which is equivalent to the [[strong aggregating algorithm]] for the log loss function). In this case the computation of the prediction can be done in closed form using matrix operations.


July 06, 2008, at 04:59 PM
by - added 2 references

Added lines 17-20:

* Rudolph E. Kalman (1960). A new approach to linear filtering and prediction problems. ''Transactions of the ASME - Journal of Basic Engineering'' '''82D''', 35-45.

* H. W. Sorenson (1970). Least-squares estimation: from Gauss to Kalman. ''IEEE Spectrum'' '''7''', 63-68.


July 06, 2008, at 04:51 PM
by - created a stub from Yoav's description

Added lines 1-17:

The Kalman filter is the on-line Bayes algorithm applied to a class of processes with linear dynamics and Gaussian noise. There are two vector sequences that describe the dynamics of the system:

* The state sequence $X_1,X_2,\ldots$.

* The observation sequence $Y_1,Y_2,\ldots$.

The dynamics is described by:

$$
X_{t+1} = A X_t + N_1,
\quad
Y_t = B X_t + N_2,
$$

where $A$ is a matrix called the ''dynamics matrix'' and $B$ is a matrix called the ''observation matrix''. $N_1$ and $N_2$ are independent samples from multi-dimensional normal distributions. These are called the ''dynamics noise'' and the ''measurement noise'', respectively.
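The model above can be simulated directly. The sketch below uses a hypothetical 2-dimensional constant-velocity example; the particular matrices $A$, $B$ and the noise covariances are illustrative choices, not part of the original description.

```python
import numpy as np

# Simulate X_{t+1} = A X_t + N_1,  Y_t = B X_t + N_2
# for an illustrative constant-velocity model (position, velocity).
rng = np.random.default_rng(0)

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])       # dynamics matrix (assumed example)
B = np.array([[1.0, 0.0]])       # observation matrix: observe position only
Q = 0.01 * np.eye(2)             # covariance of the dynamics noise N_1
R = np.array([[0.1]])            # covariance of the measurement noise N_2

x = np.zeros(2)                  # initial state
states, observations = [], []
for t in range(50):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)  # sample N_1
    y = B @ x + rng.multivariate_normal(np.zeros(1), R)  # sample N_2
    states.append(x)
    observations.append(y)
```

The state drifts under the linear dynamics while the observations reveal only a noisy projection of it, which is exactly the situation the filter is designed for.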

In the standard usage, the matrices $A$ and $B$ and the parameters of $N_1$ and $N_2$ are all assumed to be known. The algorithm receives as input the observation sequence $y_1,y_2,\ldots, y_t$ and its goal is to estimate $x_t$. This is done using the online Bayes estimator (which is equivalent to the [[strong aggregating algorithm]] for the log loss function). In this case the computation of the prediction can be done in closed form using matrix operations.
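The closed-form recursion mentioned above is the standard predict/update pair of matrix operations. A minimal sketch, assuming known $A$, $B$, noise covariances $Q$, $R$, and a Gaussian prior on the initial state (the function name and test matrices are illustrative):

```python
import numpy as np

def kalman_filter(A, B, Q, R, ys, x0, P0):
    """Closed-form online Bayes estimate of x_t given y_1, ..., y_t.

    Q, R are the covariances of the dynamics and measurement noise;
    x0, P0 parameterise the Gaussian prior on the initial state."""
    x, P = x0, P0
    estimates = []
    for y in ys:
        # Predict: push the current posterior through the linear dynamics.
        x = A @ x
        P = A @ P @ A.T + Q
        # Update: condition on the new observation y_t.
        S = B @ P @ B.T + R              # innovation covariance
        K = P @ B.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ (y - B @ x)
        P = P - K @ B @ P
        estimates.append(x)
    return estimates

# Illustrative run on data simulated from the same linear-Gaussian model.
rng = np.random.default_rng(1)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.1]])
x_true, ys = np.zeros(2), []
for _ in range(100):
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    ys.append(B @ x_true + rng.multivariate_normal(np.zeros(1), R))
est = kalman_filter(A, B, Q, R, ys, x0=np.zeros(2), P0=np.eye(2))
```

Because everything stays Gaussian, the posterior at each step is fully described by the mean `x` and covariance `P`, so no numerical integration is ever needed.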

The Kalman filter does not work well when the dynamics are far from linear or the noise is far from Gaussian. In these cases the only semi-practical solution is the "particle filter", a Monte Carlo method that samples from the posterior distribution over the states. It is only semi-practical because one needs a very large number of particles (samples) to get reliable results.
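The particle-filter idea can be sketched as a bootstrap filter: propagate a sample cloud through the (possibly nonlinear) dynamics, weight by the observation likelihood, and resample. The transition, likelihood, and parameters below are hypothetical placeholders, not anything fixed by the text above.

```python
import numpy as np

def particle_filter(transition, loglik, y_seq, n_particles, init_sampler, rng):
    """Bootstrap particle filter: Monte Carlo posterior over states.

    transition: stochastic map from a particle array to the next step;
    loglik(y, particles): log p(y | x) evaluated per particle."""
    particles = init_sampler(n_particles)
    means = []
    for y in y_seq:
        particles = transition(particles)          # propagate through dynamics
        logw = loglik(y, particles)                # weight by likelihood of y
        w = np.exp(logw - logw.max())              # stabilised normalisation
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]                 # resample
        means.append(particles.mean(axis=0))       # posterior-mean estimate
    return means

# Illustrative nonlinear 1-D model: x_{t+1} = sin(x_t) + noise.
rng = np.random.default_rng(2)
transition = lambda xs: np.sin(xs) + rng.normal(0.0, 0.1, size=xs.shape)
loglik = lambda y, xs: -0.5 * (y - xs) ** 2 / 0.04   # Gaussian observation
ys = [0.1, 0.3, 0.2]
means = particle_filter(transition, loglik, ys, 1000,
                        lambda n: rng.normal(size=n), rng)
```

The cost of the method is visible here: the estimate is only as good as the sample cloud, which is why a very large `n_particles` is typically required.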

!!!References

* Arnaud Doucet, Nando De Freitas, Eric Wan (2000). The Unscented Particle Filter. Abstract: In this paper we propose a novel method for nonlinear, non-Gaussian, on-line estimation. The algorithm consists of a particle filter that uses an unscented Kalman filter (UKF) to generate the importance proposal distribution. The UKF allows the particle filter to incorporate the latest observations into a prior updating routine. In addition, the UKF generates proposal distributions that match the true posterior more closely and also has the capability of generating heavier tailed distributions than the well known extended Kalman filter. As a result, the convergence results predict that the new filter should outperform standard particle filters, extended Kalman filters and unscented Kalman filters. A few experiments confirm this prediction.
