Gaussian Sampling Techniques
Assume an \(N\)-dimensional Gaussian random variable $$x \sim \mathcal{N}(\hat{x}, \mathbf{C}) \enspace,$$ with mean \(\hat{x}\) and covariance matrix \(\mathbf{C}\). In nonlinear filtering, it is often necessary to approximate the probability density function of \(x\) by a Dirac mixture, i.e., a set of \(L\) weighted samples according to $$\sum_{i=1}^{L} w_i \cdot \delta(x - x_i) \enspace,$$ where \(\delta(\cdot)\) denotes the Dirac delta distribution, \(x_i\) the sample positions, and \(w_i\) the sample weights. A special case is an equally weighted Dirac mixture, which simplifies to $$\frac{1}{L} \sum_{i=1}^{L} \delta(x - x_i) \enspace.$$
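As a minimal sketch of such a Dirac mixture, the snippet below represents a Gaussian by \(L\) equally weighted random samples (plain Monte Carlo sampling; the dimensions, Gaussian parameters, and sample count are illustrative assumptions, not taken from the text):

```python
import numpy as np

# Illustrative 2-D Gaussian with mean x_hat and covariance C (assumed values).
x_hat = np.array([1.0, -2.0])
C = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Equally weighted Dirac mixture of L random samples x_i with weights w_i = 1/L.
rng = np.random.default_rng(0)
L = 1000
x_i = rng.multivariate_normal(x_hat, C, size=L)   # sample positions
w_i = np.full(L, 1.0 / L)                         # sample weights

# Sanity check: the weighted sample mean and covariance should be close to x_hat and C.
approx_mean = w_i @ x_i
approx_cov = (w_i[:, None] * (x_i - approx_mean)).T @ (x_i - approx_mean)
print(approx_mean)
print(approx_cov)
```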
For example, consider the random variable $$y = g(x)$$ that results from a transformation \(g(\cdot)\) of the Gaussian random variable \(x\). The mean \(\hat{y}\) of \(y\) can be approximated with a suitable Dirac mixture according to $$\hat{y} = \int g(x) \cdot \mathcal{N}(x; \hat{x}, \mathbf{C}) \operatorname{d}x \approx \sum_{i=1}^{L} w_i \cdot g(x_i) \enspace.$$ Most sample-based Kalman filters are based on this technique and differ only in how they generate such a Dirac mixture approximation.
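The following sketch illustrates this weighted-sum approximation of \(\hat{y}\); the nonlinear function \(g\), the Gaussian parameters, and the use of Monte Carlo samples are assumptions chosen for the example, not a specific filter's sampling scheme:

```python
import numpy as np

def g(x):
    # Some assumed nonlinear transformation of a 2-D state.
    return np.array([np.sin(x[0]) * x[1], x[0] ** 2 + x[1]])

# Illustrative Gaussian parameters (same form as above).
x_hat = np.array([1.0, -2.0])
C = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Dirac mixture approximation of the Gaussian (equally weighted Monte Carlo samples).
rng = np.random.default_rng(1)
L = 5000
x_i = rng.multivariate_normal(x_hat, C, size=L)
w_i = np.full(L, 1.0 / L)

# Sample-based approximation of the mean of y = g(x):
# y_hat ≈ sum_i w_i * g(x_i)
y_hat = sum(w * g(x) for w, x in zip(w_i, x_i))
print(y_hat)
```

Deterministic sampling schemes would replace the random draws above with carefully chosen sample positions and weights, while the weighted-sum approximation of \(\hat{y}\) stays the same.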