# Kalman Filter

## Main.KalmanFilter History

Changed line 6 from:
X_{t+1} = A X_t + N_1,
to:
X_{t+1} = A X_t + N_t,
Changed line 8 from:
Y_t = B X_t + N_2,
to:
Y_t = B X_t + M_t,
Changed lines 10-12 from:
where $A$ is a matrix called the ''dynamics matrix'' and $B$ is a matrix called the ''observation matrix''. $N_1$ and $N_2$ are independent samples from multi-dimensional normal distributions. These are called the ''dynamics noise'' and the ''measurement noise'', respectively.

In the standard usage, the matrices $A$ and $B$ and the parameters of $N_1$ and $N_2$ are all assumed to be known. The algorithm receives as input the observation sequence $y_1,y_2,\ldots, y_t$ and its goal is to estimate $x_t$.  This is done using the online Bayes estimator (which is equivalent to the [[strong aggregating algorithm]] for the log loss function).  In this case the computation of the prediction can be done in closed form using matrix operations.
to:
where $A$ is a matrix called the ''dynamics matrix'', $B$ is a matrix called the ''observation matrix'', and $N_t$ and $M_t$ are independent samples from multi-dimensional normal distributions. These are called the ''dynamics noise'' and the ''measurement noise'', respectively.

In the standard usage, the matrices $A$ and $B$ and the parameters of $N_t$ and $M_t$ are all assumed to be known. The algorithm receives as input the observation sequence $y_1,y_2,\ldots, y_t$ and its goal is to estimate $x_t$.  This is done using the online Bayes estimator (which is equivalent to the [[strong aggregating algorithm]] for the log loss function).  In this case the computation of the prediction can be done in closed form using matrix operations.
July 06, 2008, at 04:59 PM by Vovk - added 2 references
* Rudolf E. Kalman (1960).  A new approach to linear filtering and prediction problems.  ''Transactions of the ASME - Journal of Basic Engineering'' '''82D''', 35-45.

* H. W. Sorenson (1970).  Least-squares estimation: from Gauss to Kalman.  ''IEEE Spectrum'' '''7''', 63-68.

July 06, 2008, at 04:51 PM by Vovk - created a stub from Yoav's description
* The state sequence $X_1,X_2,\ldots$.
* The observation sequence $Y_1,Y_2,\ldots$.
$$X_{t+1} = A X_t + N_1, \quad Y_t = B X_t + N_2,$$
where $A$ is a matrix called the ''dynamics matrix'' and $B$ is a matrix called the ''observation matrix''.  $N_1$ and $N_2$ are independent samples from multi-dimensional normal distributions. These are called the ''dynamics noise'' and the ''measurement noise'', respectively.
In the standard usage, the matrices $A$ and $B$ and the parameters of $N_1$ and $N_2$ are all assumed to be known. The algorithm receives as input the observation sequence $y_1,y_2,\ldots, y_t$ and its goal is to estimate $x_t$.  This is done using the online Bayes estimator (which is equivalent to the [[strong aggregating algorithm]] for the log loss function).  In this case the computation of the prediction can be done in closed form using matrix operations.
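The closed-form matrix computation mentioned above can be sketched as the standard predict/update recursion. This is a minimal illustration, assuming zero-mean Gaussian noises with known covariances $Q$ (dynamics noise) and $R$ (measurement noise); the function and variable names are chosen here for illustration and are not part of the original text.

```python
import numpy as np

def kalman_step(x, P, y, A, B, Q, R):
    """One predict/update cycle for the model
        X_{t+1} = A X_t + N_t,   Y_t = B X_t + M_t,
    where N_t ~ N(0, Q) and M_t ~ N(0, R) (covariance names Q, R
    are an assumption of this sketch).  x and P are the current
    state estimate and its covariance; y is the new observation."""
    # Predict: propagate the estimate and its covariance through the dynamics.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: fold in the observation y via the Kalman gain.
    S = B @ P_pred @ B.T + R               # innovation covariance
    K = P_pred @ B.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (y - B @ x_pred)  # corrected state estimate
    P_new = (np.eye(len(x)) - K @ B) @ P_pred
    return x_new, P_new
```

For example, with scalar $A = B = 1$ and repeated noisy observations of a constant, iterating `kalman_step` drives the estimate toward the observed value while the posterior covariance `P` shrinks toward its steady-state value, matching the Bayesian interpretation in the text.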