Model of Kalman Filter

We assume the current state can be modeled as a Gaussian distribution:
$$P(z_t \mid z_{t-1}) \sim \mathcal{N}(A z_{t-1},\, Q)$$
We assume neural observations can also be modeled as a Gaussian distribution:
$$P(x_t \mid z_t) \sim \mathcal{N}(C z_t,\, R)$$
We also assume a base case:
$$P(z_1) \sim \mathcal{N}(\Pi,\, V)$$
Thus the model parameters are:
$$\Theta = \{A, Q, \Pi, V, C, R\}$$

Model Training
We aim to maximize the joint likelihood of the states and the observed data:
$$P(z_{1:T}, x_{1:T}) = P(z_1) \prod_{t=2}^{T} P(z_t \mid z_{t-1}) \prod_{t=1}^{T} P(x_t \mid z_t)$$
Suppose the states are observed during training. Taking the log-likelihood and differentiating with respect to each parameter in $\Theta$, the maximum is achieved when the derivative vanishes, which yields closed-form (least-squares) estimates.
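Assuming the states $z_t$ are fully observed during training (as is typical when calibrating a neural decoder), the vanishing-derivative conditions reduce to least-squares fits. A minimal sketch with NumPy; the function and variable names here are my own, not from the original notes:

```python
import numpy as np

def fit_kalman_params(Z, X):
    """Closed-form ML estimates of Theta = {A, Q, Pi, V, C, R}
    from observed states Z (T x dz) and observations X (T x dx)."""
    T = Z.shape[0]
    Z0, Z1 = Z[:-1], Z[1:]                         # (z_{t-1}, z_t) pairs
    # State dynamics: least squares for A, residual covariance for Q
    A = np.linalg.lstsq(Z0, Z1, rcond=None)[0].T   # z_t ~ A z_{t-1}
    resid_z = Z1 - Z0 @ A.T
    Q = resid_z.T @ resid_z / (T - 1)
    # Observation model: least squares for C, residual covariance for R
    C = np.linalg.lstsq(Z, X, rcond=None)[0].T     # x_t ~ C z_t
    resid_x = X - Z @ C.T
    R = resid_x.T @ resid_x / T
    # Base case: with a single training sequence, a common choice is
    # Pi = first observed state and V = Q (an assumption, not from the notes)
    Pi, V = Z[0], Q.copy()
    return A, Q, Pi, V, C, R
```

With multiple training trials, $\Pi$ and $V$ would instead be the sample mean and covariance of the initial states across trials.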
Testing the Model

We'll use the following notation for the mean and covariance:
$$\mu_t^s = E[z_t \mid x_1, \ldots, x_s], \qquad \Sigma_t^s = \mathrm{cov}[z_t \mid x_1, \ldots, x_s]$$
In the testing phase, we aim to compute $P(z_t \mid x_1, \ldots, x_t)$. At each time step, we apply two sub-steps: a one-step prediction and then a measurement update.
One-step Prediction

We assume that we have the mean $\mu_{t-1}^{t-1}$ and covariance $\Sigma_{t-1}^{t-1}$ from the previous iteration, and we need to compute $\mu_t^{t-1}$ and $\Sigma_t^{t-1}$. Because $z_t = A z_{t-1} + \gamma$ with $\gamma \sim \mathcal{N}(0, Q)$:
$$\mu_t^{t-1} = A \mu_{t-1}^{t-1}, \qquad \Sigma_t^{t-1} = A \Sigma_{t-1}^{t-1} A^\top + Q$$
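The one-step prediction can be sketched as a single function (names are mine, chosen for illustration):

```python
import numpy as np

def predict(mu_prev, Sigma_prev, A, Q):
    """One-step prediction:
    (mu_{t-1}^{t-1}, Sigma_{t-1}^{t-1}) -> (mu_t^{t-1}, Sigma_t^{t-1})."""
    mu_pred = A @ mu_prev                   # E[z_t | x_1..x_{t-1}] = A mu_{t-1}^{t-1}
    Sigma_pred = A @ Sigma_prev @ A.T + Q   # propagated covariance plus process noise
    return mu_pred, Sigma_pred
```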
Measurement Update

We take advantage of a property of Gaussian distributions. If
$$x = \begin{bmatrix} x_a \\ x_b \end{bmatrix} \sim \mathcal{N}\!\left(\begin{bmatrix} \mu_a \\ \mu_b \end{bmatrix}, \begin{bmatrix} \Sigma_{aa} & \Sigma_{ab} \\ \Sigma_{ba} & \Sigma_{bb} \end{bmatrix}\right),$$
then $P(x_a \mid x_b)$ is Gaussian with
$$E[x_a \mid x_b] = \mu_a + \Sigma_{ab} \Sigma_{bb}^{-1}(x_b - \mu_b), \qquad \mathrm{cov}[x_a \mid x_b] = \Sigma_{aa} - \Sigma_{ab} \Sigma_{bb}^{-1} \Sigma_{ba}$$
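This conditioning property is easy to encode directly; a small helper (all names here are mine) that returns the conditional moments:

```python
import numpy as np

def condition_gaussian(mu_a, mu_b, S_aa, S_ab, S_bb, x_b):
    """Moments of P(x_a | x_b) for jointly Gaussian [x_a; x_b].
    S_ab is the cross-covariance cov(x_a, x_b); S_ba = S_ab.T."""
    G = S_ab @ np.linalg.inv(S_bb)       # gain mapping the x_b surprise onto x_a
    mean = mu_a + G @ (x_b - mu_b)       # mu_a + S_ab S_bb^{-1} (x_b - mu_b)
    cov = S_aa - G @ S_ab.T              # S_aa - S_ab S_bb^{-1} S_ba
    return mean, cov
```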
Because $x_t = C z_t + \sigma$ with $\sigma \sim \mathcal{N}(0, R)$, we have $E[x_t \mid x_1, \ldots, x_{t-1}] = E[C z_t + \sigma \mid x_1, \ldots, x_{t-1}] = C \mu_t^{t-1}$. Similarly, $\mathrm{cov}[x_t \mid x_1, \ldots, x_{t-1}] = C \Sigma_t^{t-1} C^\top + R$.
Now we have the joint distribution
$$P\!\left(\begin{bmatrix} x_t \\ z_t \end{bmatrix} \,\middle|\, x_1, \ldots, x_{t-1}\right) \sim \mathcal{N}\!\left(\begin{bmatrix} C \mu_t^{t-1} \\ \mu_t^{t-1} \end{bmatrix}, \begin{bmatrix} C \Sigma_t^{t-1} C^\top + R & C \Sigma_t^{t-1} \\ \Sigma_t^{t-1} C^\top & \Sigma_t^{t-1} \end{bmatrix}\right)$$
Applying the conditioning property with $x_a = z_t$ and $x_b = x_t$ gives the measurement update:
$$\mu_t^t = \mu_t^{t-1} + \Sigma_t^{t-1} C^\top (C \Sigma_t^{t-1} C^\top + R)^{-1}(x_t - C \mu_t^{t-1})$$
$$\Sigma_t^t = \Sigma_t^{t-1} - \Sigma_t^{t-1} C^\top (C \Sigma_t^{t-1} C^\top + R)^{-1} C \Sigma_t^{t-1}$$
We can write the measurement updates more compactly with the Kalman gain $K_t = \Sigma_t^{t-1} C^\top (C \Sigma_t^{t-1} C^\top + R)^{-1}$:
$$\mu_t^t = \mu_t^{t-1} + K_t (x_t - C \mu_t^{t-1}), \qquad \Sigma_t^t = (I - K_t C)\, \Sigma_t^{t-1}$$
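Putting the two sub-steps together, a minimal filter loop might look like the following sketch (function and variable names are mine; at $t = 1$ the base case $(\Pi, V)$ plays the role of the one-step prediction):

```python
import numpy as np

def kalman_update(mu_pred, Sigma_pred, x, C, R):
    """Measurement update via the Kalman gain K_t."""
    S = C @ Sigma_pred @ C.T + R             # innovation covariance C Sigma C^T + R
    K = Sigma_pred @ C.T @ np.linalg.inv(S)  # Kalman gain K_t
    mu = mu_pred + K @ (x - C @ mu_pred)     # mu_t^t
    Sigma = Sigma_pred - K @ C @ Sigma_pred  # Sigma_t^t = (I - K_t C) Sigma_pred
    return mu, Sigma

def kalman_filter(X, A, Q, Pi, V, C, R):
    """Compute the mean of P(z_t | x_1..x_t) for each row x_t of X."""
    mu, Sigma = Pi, V
    means = []
    for t, x in enumerate(X):
        if t > 0:                                    # base case uses (Pi, V) directly
            mu, Sigma = A @ mu, A @ Sigma @ A.T + Q  # one-step prediction
        mu, Sigma = kalman_update(mu, Sigma, x, C, R)
        means.append(mu)
    return np.array(means)
```

Note that in practice one often avoids the explicit inverse, e.g. via `np.linalg.solve`, for numerical stability; the inverse is kept here to mirror the equations above.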