A Nonlinear Estimation Problem - Part II
We consider the same estimation problem from Part I. Recall that the measurement model is $$y_k = h(x_k) + v = \begin{bmatrix} \sqrt{(p^x_{k})^2 + (p^y_{k})^2} \\ \operatorname{atan2}(p^y_{k}, p^x_{k}) \end{bmatrix} + v \enspace.$$ To use nonlinear estimators to estimate the target, a likelihood function is required. For the above generative measurement model, the corresponding likelihood function is obtained according to $$f(y_k | x_k) = \int_{\mathbb{R}^2} \delta(y_k - h(x_k) - v) \cdot \mathcal{N}(v; 0, \mathbf{Q}) \operatorname{d}v = \mathcal{N}(y_k - h(x_k); 0, \mathbf{Q}) \enspace.$$
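For a concrete numeric picture of this formula, the following Python/NumPy sketch (an illustration outside the toolbox, with a hypothetical diagonal noise covariance) transforms each state sample into the polar measurement space and evaluates the Gaussian noise density at the residual \(y_k - h(x_k)\):

```python
import numpy as np

def polar_log_likelihood(y, state_samples, Q):
    """Log-likelihood log N(y - h(x); 0, Q) for a range/bearing measurement.

    y             : measurement, shape (2,)
    state_samples : state samples, shape (d, n), positions px, py in rows 0 and 1
    Q             : 2x2 measurement noise covariance
    """
    px, py = state_samples[0, :], state_samples[1, :]
    h = np.vstack([np.hypot(px, py),     # range sqrt(px^2 + py^2)
                   np.arctan2(py, px)])  # bearing atan2(py, px)
    diffs = y[:, None] - h               # residuals y - h(x), shape (2, n)
    Q_inv = np.linalg.inv(Q)
    # Normalization constant of a 2D Gaussian in the log domain
    log_norm = -0.5 * (2.0 * np.log(2.0 * np.pi) + np.log(np.linalg.det(Q)))
    # Quadratic form diffs' * Q^-1 * diffs, one value per sample column
    return log_norm - 0.5 * np.einsum('in,ij,jn->n', diffs, Q_inv, diffs)
```

A sample whose predicted measurement \(h(x)\) lies closer to the observed \(y\) receives a larger log-likelihood value, which is exactly what weights the particles in the update step.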
As with the previously discussed measurement models, implementing the likelihood function requires writing a new class that inherits from the Likelihood interface and implements the abstract class method
logValues = logLikelihood(obj, stateSamples, measurement)
Again, the stateSamples argument is a set of samples representing the current system state, e.g., the particles of a particle filter, and the measurement argument contains the measurement passed to the filter's update() method. It is important to note that the logarithm of the likelihood function, the so-called \(\log\)-likelihood, has to be implemented rather than the likelihood function \(f(y_k | x_k)\) itself. For the considered problem, the \(\log\)-likelihood is simply the logarithm of the Gaussian noise PDF evaluated at the difference between \(y_k\) and \(h(x_k)\) according to $$\log(f(y_k | x_k)) = \log(\mathcal{N}(y_k - h(x_k); 0, \mathbf{Q})) \enspace.$$ Each column of the return value logValues contains the \(\log\)-likelihood value for the state sample in the corresponding column of stateSamples. The following PolarMeasLikelihood class shows how to implement the considered likelihood.
PolarMeasLikelihood.m
classdef PolarMeasLikelihood < Likelihood
    methods
        function obj = PolarMeasLikelihood()
            % Time-invariant, zero-mean Gaussian measurement noise
            obj.measNoise = Gaussian(zeros(2, 1), [1e-2 1e-4]);
        end
        
        % Implement the abstract logLikelihood() method inherited from Likelihood
        function logValues = logLikelihood(obj, stateSamples, measurement)
            px = stateSamples(1, :);
            py = stateSamples(2, :);
            
            % Evaluate deterministic part of the measurement model
            h = [sqrt(px.^2 + py.^2)
                 atan2(py, px)      ];
            
            % Compute differences y - h(x)
            diffs = bsxfun(@minus, measurement, h);
            
            % Evaluate the logarithmic Gaussian PDF of the measurement noise
            logValues = obj.measNoise.logPdf(diffs);
        end
    end
    
    properties (Access = 'private')
        measNoise;
    end
end
Note that we make use of the Gaussian distribution's logPdf() method.
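Working in the log domain matters for more than convenience: likelihood values of individual particles can be so small that they underflow to zero in floating-point arithmetic. The following Python/NumPy sketch (an illustration outside the toolbox) shows how log-likelihood values can be turned into normalized particle weights stably via the log-sum-exp trick:

```python
import numpy as np

def weights_from_log_likelihoods(log_values):
    """Convert per-particle log-likelihoods into normalized weights.

    Subtracting the maximum before exponentiating (log-sum-exp trick)
    keeps the computation stable even for very negative log-likelihoods,
    where a naive exp() would underflow to zero for every particle.
    """
    shifted = log_values - np.max(log_values)
    w = np.exp(shifted)
    return w / np.sum(w)
```

For example, log-likelihoods around \(-1000\) correspond to likelihoods far below the smallest representable double, yet the normalized weights come out correctly because only the differences between the log values matter.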
In order to use the PolarMeasLikelihood class in combination with the sampling importance resampling particle filter (SIRPF), we only have to set the measModel variable in NonlinearEstimationExample.m to
measModel = PolarMeasLikelihood();
change the filter variable and add a line that sets the number of particles to be used:
filter = SIRPF();
filter.setNumParticles(10^5);
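For intuition about what the SIRPF does internally, its eponymous resampling step can be sketched as follows. This is a generic systematic resampler written in Python/NumPy for illustration; the toolbox's actual implementation may differ in details:

```python
import numpy as np

def systematic_resample(particles, weights, rng=None):
    """Draw n particles with replacement, proportional to their weights.

    particles : particle set, shape (d, n)
    weights   : normalized weights, shape (n,)

    Systematic resampling uses a single uniform draw and n evenly
    spaced pointers into the cumulative weight distribution, which
    keeps the variance of the resulting particle counts low.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n = particles.shape[1]
    positions = (rng.uniform() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0  # guard against floating-point round-off
    indices = np.searchsorted(cumulative, positions)
    return particles[:, indices]
```

After resampling, all particles carry equal weight again, and particles with negligible likelihood have been replaced by copies of the highly weighted ones.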
Executing the file yields
>> NonlinearEstimationExample()
Predicted state estimate:
State mean:
    0.9947
    1.0005
    0.0008
    0.0008
    0.0008
State covariance matrix:
    9.9935   -0.0356   -0.0016    0.0802   -0.0003
   -0.0356   10.0306    0.0023    0.0062    0.0010
   -0.0016    0.0023    0.1012   -0.0009    0.0094
    0.0802    0.0062   -0.0009    1.0996   -0.0005
   -0.0003    0.0010    0.0094   -0.0005    0.1013
Updated state estimate:
State mean:
    2.4265
    1.7634
   -0.0414
    0.0696
    0.0103
State covariance matrix:
    0.0062    0.0040    0.0017    0.0053    0.0022
    0.0040    0.0039    0.0011    0.0005    0.0003
    0.0017    0.0011    0.0926   -0.0001   -0.0078
    0.0053    0.0005   -0.0001    0.8366    0.0756
    0.0022    0.0003   -0.0078    0.0756    0.1053
Nevertheless, even though implementing such a likelihood is straightforward, we had to reimplement the deterministic part of the measurement equation in the PolarMeasLikelihood class, although we already did so in the PolarMeasModel class.
Here, the object-oriented design of the Nonlinear Estimation Toolbox comes into play again: the AdditiveNoiseMeasurementModel class already inherits from Likelihood and implements the logLikelihood() method based on the abstract measurementEquation() method and the measurement noise distribution set via setNoise()! This is another reason to prefer the AdditiveNoiseMeasurementModel over the MeasurementModel class when implementing a measurement equation that is corrupted by purely additive noise.
When implementing an AdditiveNoiseMeasurementModel, you get the \(\log\)-likelihood for free.
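This design is essentially the template method pattern: the base class implements the log-likelihood once, in terms of an abstract measurement equation that the subclass supplies. A minimal Python sketch of the idea (class and method names here are illustrative, not the toolbox API):

```python
import numpy as np
from abc import ABC, abstractmethod

class AdditiveNoiseModel(ABC):
    """Base class: derives the log-likelihood from the abstract
    measurement equation h(x) and an additive Gaussian noise covariance Q."""

    def __init__(self, Q):
        self.Q = np.atleast_2d(Q)

    @abstractmethod
    def measurement_equation(self, state_samples):
        """Return h(x) for each state sample column."""

    def log_likelihood(self, state_samples, measurement):
        # log N(y - h(x); 0, Q), implemented once for all subclasses
        diffs = measurement[:, None] - self.measurement_equation(state_samples)
        Q_inv = np.linalg.inv(self.Q)
        dim = self.Q.shape[0]
        log_norm = -0.5 * (dim * np.log(2.0 * np.pi)
                           + np.log(np.linalg.det(self.Q)))
        return log_norm - 0.5 * np.einsum('in,ij,jn->n', diffs, Q_inv, diffs)

class PolarModel(AdditiveNoiseModel):
    # Only the deterministic part of the measurement model needs to be
    # written; the log-likelihood comes for free from the base class.
    def measurement_equation(self, state_samples):
        px, py = state_samples[0, :], state_samples[1, :]
        return np.vstack([np.hypot(px, py), np.arctan2(py, px)])
```

Subclasses write the measurement equation exactly once, and every estimator that needs either the generative model or the likelihood can use the same object.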
Hence, we change the measModel variable back to
measModel = PolarMeasModel();
and re-execute the example
>> NonlinearEstimationExample()
Predicted state estimate:
State mean:
    0.9844
    0.9905
   -0.0004
    0.0018
   -0.0009
State covariance matrix:
   10.0351   -0.0781   -0.0030    0.1139    0.0032
   -0.0781    9.9702    0.0063    0.0049    0.0055
   -0.0030    0.0063    0.1008   -0.0001    0.0101
    0.1139    0.0049   -0.0001    1.1062   -0.0000
    0.0032    0.0055    0.0101   -0.0000    0.1012
Updated state estimate:
State mean:
    2.4257
    1.7668
    0.0657
   -0.0786
   -0.0019
State covariance matrix:
    0.0069    0.0044    0.0072    0.0032    0.0001
    0.0044    0.0041    0.0036    0.0014   -0.0007
    0.0072    0.0036    0.0809    0.0167    0.0035
    0.0032    0.0014    0.0167    0.7989   -0.0540
    0.0001   -0.0007    0.0035   -0.0540    0.0898
As we see, the particle filter still works!
As with the linear estimators, exchanging nonlinear estimators requires only minimal effort. For example, suppose we want to perform the state estimation with another nonlinear estimator, say the Gaussian particle filter (GPF), instead of the SIRPF. As expected, we only have to change a single line:
filter = GPF();
(the GPF also provides a setNumParticles() method) and re-run NonlinearEstimationExample():
>> NonlinearEstimationExample()
Predicted state estimate:
State mean:
    0.9963
    0.9970
    0.0016
   -0.0024
   -0.0006
State covariance matrix:
    9.9754    0.0095   -0.0041    0.0965   -0.0032
    0.0095    9.9173    0.0001   -0.0067    0.0018
   -0.0041    0.0001    0.1006    0.0022    0.0100
    0.0965   -0.0067    0.0022    1.0988    0.0005
   -0.0032    0.0018    0.0100    0.0005    0.1010
Updated state estimate:
State mean:
    2.4156
    1.7601
    0.0486
   -0.1962
    0.0737
State covariance matrix:
    0.0062    0.0038   -0.0013    0.0047   -0.0046
    0.0038    0.0036   -0.0012    0.0058   -0.0031
   -0.0013   -0.0012    0.1414    0.0003    0.0201
    0.0047    0.0058    0.0003    1.0463    0.0108
   -0.0046   -0.0031    0.0201    0.0108    0.0807
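The GPF differs from the SIRPF in that, instead of resampling, it approximates the posterior by a single Gaussian fitted to the weighted particle set. That moment-matching step can be sketched as follows (an illustrative Python/NumPy snippet, not the toolbox implementation):

```python
import numpy as np

def gaussian_fit(particles, weights):
    """Weighted sample mean and covariance of a particle set.

    particles : particle set, shape (d, n)
    weights   : normalized weights, shape (n,)
    """
    mean = particles @ weights
    centered = particles - mean[:, None]
    # Weighted outer-product sum: sum_i w_i * (x_i - m)(x_i - m)'
    cov = (centered * weights) @ centered.T
    return mean, cov
```

The Gaussian fitted this way is then used as the updated state estimate, which is why the GPF, unlike the SIRPF, always returns a Gaussian posterior approximation.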
So far, we have learned how to implement models and use them in combination with linear and nonlinear estimators. In the following, we discuss how to generate random system state trajectories and measurements based on the implemented system and measurement models. This allows us to easily perform meaningful simulations of a given estimation problem.