The Kalman filter estimates the state of a system at time $k$ via a linear stochastic difference equation, assuming the state at time $k$ evolves from the previous state at time $k-1$. See the ref.: https://en.wikipedia.org/wiki/Kalman_filter
In other words, the purpose of the Kalman filter is to predict the next state using prior knowledge of the current state.
In this repository a hybrid Kalman filter is implemented: a continuous-time model with discrete-time measurements. See the ref.: https://en.wikipedia.org/wiki/Kalman_filter#Hybrid_Kalman_filter
Define the mentioned linear stochastic difference equation:
$$x_{k} = A⋅x_{k-1} + B⋅u_{k-1} + w_{k-1} \tag{1}$$
Define the measurement model: $$z_{k} = H⋅x_{k} + v_{k}\tag{2}$$
Let's denote variables:
Let's use the dash sign " $-$ " as a superscript to indicate the a priori state.
The a priori state in matrix notation is defined as
$$\hat{x}^-_{k} = A⋅\hat{x}_{k-1} + B⋅u_{k-1} \tag{5}$$
$$\text{, where $\hat{x}^-_{k}$ - a priori state (a.k.a. predicted), $\hat{x}_{k-1}$ - a posteriori state (a.k.a. previous)} $$
Note: the a posteriori state $\hat{x}_{k-1}$ at the 0-th (initial) time step should be guessed
The error covariance matrix $P^-$ is defined as
$$P^-_{k} = A⋅P_{k-1}⋅A^{T} + Q \tag{6}$$
$$\text{, where $P_{k-1}$ - previously estimated error covariance matrix of size $n \times n$ (should match transition matrix dimensions), Q - process noise covariance}$$
Note: $P_{k-1}$ at the 0-th (initial) time step should be guessed
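The two prediction equations (5) and (6) can be sketched in plain Rust for the 2×2 model used later in this README. This is a minimal dependency-free sketch; the function and type names (`predict_state`, `predict_covariance`, `Vec2`, `Mat2`) are illustrative, not the repository's actual API:

```rust
type Vec2 = [f64; 2];
type Mat2 = [[f64; 2]; 2];

// x^-_k = A·x_{k-1} + B·u_{k-1}  -- equation (5)
fn predict_state(a: &Mat2, x: &Vec2, b: &Vec2, u: f64) -> Vec2 {
    [
        a[0][0] * x[0] + a[0][1] * x[1] + b[0] * u,
        a[1][0] * x[0] + a[1][1] * x[1] + b[1] * u,
    ]
}

// P^-_k = A·P_{k-1}·A^T + Q  -- equation (6)
fn predict_covariance(a: &Mat2, p: &Mat2, q: &Mat2) -> Mat2 {
    // First compute A·P ...
    let mut ap = [[0.0; 2]; 2];
    for i in 0..2 {
        for j in 0..2 {
            ap[i][j] = a[i][0] * p[0][j] + a[i][1] * p[1][j];
        }
    }
    // ... then (A·P)·A^T + Q; note that A^T[k][j] == a[j][k]
    let mut out = [[0.0; 2]; 2];
    for i in 0..2 {
        for j in 0..2 {
            out[i][j] = ap[i][0] * a[j][0] + ap[i][1] * a[j][1] + q[i][j];
        }
    }
    out
}

fn main() {
    let dt = 0.1;
    let a = [[1.0, dt], [0.0, 1.0]]; // transition matrix, see eq. (17) below
    let b = [dt * dt / 2.0, dt];     // control input matrix, see eq. (18) below
    let x = [0.0, 1.0];              // initial guess: position 0, velocity 1
    let p = [[1.0, 0.0], [0.0, 1.0]]; // initial covariance guess
    let q = [[0.0; 2]; 2];           // zero process noise, just for the demo
    let x_prior = predict_state(&a, &x, &b, 2.0);
    let p_prior = predict_covariance(&a, &p, &q);
    println!("x^- = {:?}", x_prior);
    println!("P^- = {:?}", p_prior);
}
```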
The Kalman gain (which minimizes the estimate variance) in matrix notation is defined as:
$$K_{k} = P^-_{k}⋅H^{T}⋅(H⋅P^-_{k}⋅H^{T}+R)^{-1} \tag{7}$$
$$\text{, where H - transformation matrix, R - measurement noise covariance}$$
After evaluating the Kalman gain we need to update the a priori state $\hat{x}^-_{k}$. In order to do that we need to calculate the measurement residual:
$$r_{k} = z_{k} - H⋅\hat{x}^-_{k} \tag{8}$$
$$\text{, where $z_{k}$ - true measurement, $H⋅\hat{x}^-_{k}$ - previously estimated measurement}$$
Then we can update the predicted state $\hat{x}_{k}$:
$$\hat{x}_{k} = \hat{x}^-_{k} + K_{k}⋅r_{k}$$
$$\text{or} \tag{9}$$
$$\hat{x}_{k} = \hat{x}^-_{k} + K_{k}⋅(z_{k} - H⋅\hat{x}^-_{k})$$
After that we should update the error covariance matrix $P_{k}$, which will be used in the next time step (and so on): $$P_{k} = (I - K_{k}⋅H)⋅P^-_{k}\tag{10}$$ $$\text{, where $I$ - identity matrix (square matrix with ones on the main diagonal and zeros elsewhere)}$$
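For the scalar-measurement case $H = \begin{bmatrix} 1 & 0 \end{bmatrix}$ used later in this README, equations (7)–(10) reduce to simple expressions: $P^-⋅H^T$ is just the first column of $P^-$ and $H⋅P^-⋅H^T$ is the scalar $P^-_{00}$. A hedged sketch (the `update` function name and shapes are illustrative, not the repository's API):

```rust
type Vec2 = [f64; 2];
type Mat2 = [[f64; 2]; 2];

// Update stage for H = [1 0] and scalar measurement noise covariance R.
// Returns the a posteriori state and covariance.
fn update(x_prior: &Vec2, p_prior: &Mat2, z: f64, r: f64) -> (Vec2, Mat2) {
    // K_k = P^-_k·H^T·(H·P^-_k·H^T + R)^-1          (7)
    let s = p_prior[0][0] + r; // innovation covariance, a scalar here
    let k = [p_prior[0][0] / s, p_prior[1][0] / s];
    // r_k = z_k - H·x^-_k                            (8)
    let residual = z - x_prior[0];
    // x_k = x^-_k + K_k·r_k                          (9)
    let x = [x_prior[0] + k[0] * residual, x_prior[1] + k[1] * residual];
    // P_k = (I - K_k·H)·P^-_k                        (10)
    let p = [
        [(1.0 - k[0]) * p_prior[0][0], (1.0 - k[0]) * p_prior[0][1]],
        [p_prior[1][0] - k[1] * p_prior[0][0], p_prior[1][1] - k[1] * p_prior[0][1]],
    ];
    (x, p)
}

fn main() {
    // One update with an a priori state, covariance, measurement and R.
    let (x, p) = update(&[0.11, 1.2], &[[1.01, 0.1], [0.1, 1.0]], 0.5, 1.44);
    println!("x = {:?}", x);
    println!("P = {:?}", p);
}
```

Note the sanity check this enables: with $R = 0$ (a perfectly trusted measurement) the updated position snaps to $z$ exactly, and the position variance collapses to zero.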
The whole algorithm can be described with a high-level diagram:
Fig 1. Operation of the Kalman filter. Welch & Bishop, 'An Introduction to the Kalman Filter'
Considering accelerated motion, let's write down its equations:
Velocity: $$v = v_{0} + at \tag{11}$$ $$v(t) = x'(t) $$ $$a(t) = v'(t) = x''(t)$$
Position: $$x = x_{0} + v_{0}t + \frac{at^2}{2} \tag{12}$$
Let's write $(11)$ and $(12)$ in Lagrange form:
$$x'_{k} = x'_{k-1} + x''_{k-1}\Delta t \tag{13}$$
$$x_{k} = x_{k-1} + x'_{k-1}\Delta t + \frac{x''_{k-1}\Delta t^2}{2} \tag{14}$$
The state vector $x_{k}$ looks like:
$$x_{k} = \begin{bmatrix} x_{k} \\ x'_{k} \end{bmatrix} = \begin{bmatrix} x_{k-1} + x'_{k-1}\Delta t + \frac{x''_{k-1}\Delta t^2}{2} \\ x'_{k-1} + x''_{k-1}\Delta t \end{bmatrix} \tag{15}$$
Matrix form of $x_{k}$:
$$x_{k} = \begin{bmatrix} x_{k} \\ x'_{k} \end{bmatrix} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} ⋅ \begin{bmatrix} x_{k-1} \\ x'_{k-1} \end{bmatrix} + \begin{bmatrix} \frac{\Delta t^2}{2} \\ \Delta t \end{bmatrix} ⋅ x''_{k-1} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} ⋅ x_{k-1} + \begin{bmatrix} \frac{\Delta t^2}{2} \\ \Delta t \end{bmatrix} ⋅ x''_{k-1} \tag{16}$$
Taking a close look at $(16)$ and $(1)$, we can write the transition matrix $A$ and the control input matrix $B$ as follows:
$$A = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} \tag{17}$$
$$B = \begin{bmatrix} \frac{\Delta t^2}{2} \\ \Delta t \end{bmatrix} \tag{18}$$
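One discrete step of the matrix model $(16)$ with $A$ and $B$ from $(17)$–$(18)$ should reproduce the kinematic equations $(13)$–$(14)$ exactly; a small sketch to verify this (the `step` helper is illustrative):

```rust
// One discrete step of the matrix model (16): x_k = A·x_{k-1} + B·x''_{k-1}.
fn step(dt: f64, pos: f64, vel: f64, acc: f64) -> (f64, f64) {
    let a = [[1.0, dt], [0.0, 1.0]]; // transition matrix A, eq. (17)
    let b = [dt * dt / 2.0, dt];     // control input matrix B, eq. (18)
    (
        a[0][0] * pos + a[0][1] * vel + b[0] * acc,
        a[1][0] * pos + a[1][1] * vel + b[1] * acc,
    )
}

fn main() {
    let dt = 0.5;
    let (pos_k, vel_k) = step(dt, 10.0, 3.0, 2.0);
    // The scalar kinematics (13)-(14) must agree with the matrix form:
    assert!((pos_k - (10.0 + 3.0 * dt + 2.0 * dt * dt / 2.0)).abs() < 1e-12);
    assert!((vel_k - (3.0 + 2.0 * dt)).abs() < 1e-12);
    println!("position: {pos_k}, velocity: {vel_k}");
}
```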
Let's find the transformation matrix $H$. According to $(2)$:
$$z_{k} = H⋅x_{k} + v_{k} = \begin{bmatrix} 1 & 0 \end{bmatrix} ⋅ \begin{bmatrix} x_{k} \\ x'_{k} \end{bmatrix} + v_{k} \tag{19}$$
$$ H = \begin{bmatrix} 1 & 0 \end{bmatrix} \tag{20}$$
Notice: $v_{k}$ in $(19)$ is not velocity, but measurement noise! Don't be confused by the notation. E.g.:
$$ \text{$ x_{k} = \begin{bmatrix} 375.74 \\ 0 \end{bmatrix} $ (assume zero velocity), $ v_{k} = 2.64 \Rightarrow $} $$
$$ \text{$ \Rightarrow z_{k} = \begin{bmatrix} 1 & 0 \end{bmatrix} ⋅ \begin{bmatrix} 375.74 \\ 0 \end{bmatrix} + 2.64 = 375.74 + 2.64 = 378.38 $}$$
$$ \text{- the observation $z_{k}$ is a scalar: the position component of $x_{k}$ plus the noise $v_{k}$.}$$
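The same numeric example, sketched in Rust ($H⋅x_{k}$ picks out the position component, so the observation is a scalar; the `measure` helper is illustrative):

```rust
// z_k = H·x_k + v_k, eq. (19). With H = [1 0] this picks out the
// position component, so the observation z_k is a scalar.
fn measure(x: [f64; 2], v: f64) -> f64 {
    let h = [1.0, 0.0]; // transformation matrix H, eq. (20)
    h[0] * x[0] + h[1] * x[1] + v
}

fn main() {
    let x = [375.74, 0.0]; // state: position 375.74, assumed zero velocity
    let v = 2.64;          // measurement noise -- NOT velocity
    println!("z = {}", measure(x, v)); // ≈ 378.38
}
```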
Process noise covariance matrix $Q$:
$$ Q = \begin{bmatrix} \sigma^2_{x} & \sigma_{x} \sigma_{x'} \\ \sigma_{x'} \sigma_{x} & \sigma^2_{x'} \end{bmatrix} \tag{21}$$
$$\text{, where} $$
$$ \text{$\sigma_{x}$ - standard deviation of position} $$
$$ \text{$\sigma_{x'}$ - standard deviation of velocity} $$
Since we know $(14)$, we can define $\sigma_{x}$ and $\sigma_{x'}$ as:
$$ \sigma_{x} = \sigma_{x''} \frac{\Delta t^2}{2} \tag{22}$$
$$ \sigma_{x'} = \sigma_{x''} \Delta t \tag{23}$$
$$\text{, where $\sigma_{x''}$ - standard deviation of acceleration (a tuned value)} $$
And now the process noise covariance matrix $Q$ can be defined as:
$$ Q = \begin{bmatrix} (\sigma_{x''} \frac{\Delta t^2}{2})^2 & \sigma_{x''} \frac{\Delta t^2}{2} \sigma_{x''} \Delta t \\ \sigma_{x''} \Delta t \sigma_{x''} \frac{\Delta t^2}{2} & (\sigma_{x''} \Delta t)^2 \end{bmatrix} = $$
$$ = \begin{bmatrix} (\sigma_{x''} \frac{\Delta t^2}{2})^2 & (\sigma_{x''})^2 \frac{\Delta t^2}{2} \Delta t \\ (\sigma_{x''})^2 \Delta t \frac{\Delta t^2}{2} & (\sigma_{x''} \Delta t)^2 \end{bmatrix} = \begin{bmatrix} (\frac{\Delta t^2}{2})^2 & \frac{\Delta t^2}{2} \Delta t \\ \Delta t \frac{\Delta t^2}{2} & \Delta t^2 \end{bmatrix} \sigma^2_{x''}$$
$$ = \begin{bmatrix} \frac{\Delta t^4}{4} & \frac{\Delta t^3}{2} \\ \frac{\Delta t^3}{2} & \Delta t^2 \end{bmatrix} \sigma^2_{x''} \tag{24}$$
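Equation $(24)$ translates directly into code; a sketch, parameterised by the time step and the tuned acceleration standard deviation (the `process_noise` name is illustrative):

```rust
// Process noise covariance Q from eq. (24), built from the time step dt
// and the (tuned) standard deviation of acceleration.
fn process_noise(dt: f64, std_dev_a: f64) -> [[f64; 2]; 2] {
    let s2 = std_dev_a * std_dev_a;
    [
        [dt.powi(4) / 4.0 * s2, dt.powi(3) / 2.0 * s2],
        [dt.powi(3) / 2.0 * s2, dt.powi(2) * s2],
    ]
}

fn main() {
    let q = process_noise(0.1, 0.25);
    // Q must come out symmetric, as eq. (24) requires.
    assert_eq!(q[0][1], q[1][0]);
    println!("Q = {:?}", q);
}
```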
The covariance of the measurement noise $R$ is a scalar (a matrix of size $1 \times 1$), defined as the variance of the measurement noise:
$$ R = \sigma^2_{z}\tag{25}$$
Rust implementation is here
Example of usage:
```rust
use rand::prelude::*;
use rand::distributions::Standard;

let dt = 0.1; // Time step
let u = 2.0; // Control input (acceleration)
let std_dev_a = 0.25; // Standard deviation of acceleration (process noise)
let std_dev_m = 1.2; // Standard deviation of measurement noise
let t: nalgebra::SVector::<f32, 1000> = nalgebra::SVector::<f32, 1000>::from_iterator(float_loop(0.0, 100.0, dt));
let track = t.map(|t| dt*(t*t - t));
let mut kalman = Kalman1D::new(dt, u, std_dev_a, std_dev_m);
let mut measurement: Vec<f32> = vec![];
let mut predictions: Vec<f32>= vec![];
for (t, x) in t.iter().zip(track.iter()) {
// Add some noise to perfect track
let v: f32 = StdRng::from_entropy().sample::<f32, Standard>(Standard) * (50.0+50.0) - 50.0; // Generate noise in [-50, 50)
let z = kalman.H.x * x + v;
measurement.push(z);
// Predict stage
kalman.predict();
predictions.push(kalman.x.x);
// Update stage
kalman.update(z).unwrap();
}
println!("time;perfect;measurement;prediction");
for i in 0..track.len() {
println!("{};{};{};{}", t[i], track[i], measurement[i], predictions[i]);
}
```
This is what the exported chart looks like:
@todo: physical model / text / code / plots
I struggled with displaying matrices in GitHub's MathJax markdown. If you know a better way to do it, you are welcome to contribute.