# Particle filter

Particle filters, also known as sequential Monte Carlo methods (SMC), are sophisticated model estimation techniques based on simulation.

They are usually used to estimate Bayesian models and are the sequential ('on-line') analogue of Markov chain Monte Carlo (MCMC) batch methods; they are often similar to importance sampling methods. If well designed, particle filters can be much faster than MCMC. They are often an alternative to the extended Kalman filter (EKF) or unscented Kalman filter (UKF), with the advantage that, with sufficient samples, they approach the Bayesian optimal estimate, so they can be made more accurate than either the EKF or the UKF. The approaches can also be combined by using a version of the Kalman filter as a proposal distribution for the particle filter.

## Goal

The particle filter aims to estimate the sequence of hidden parameters $x_k$ for $k=0,1,2,3,\ldots$, based only on the observed data $y_k$ for $k=0,1,2,3,\ldots$. All Bayesian estimates of $x_k$ follow from the posterior distribution $p(x_k|y_0,y_1,\ldots,y_k)$. In contrast, the MCMC or importance sampling approach would model the full posterior $p(x_0,x_1,\ldots,x_k|y_0,y_1,\ldots,y_k)$.

## Model

Particle methods assume that $x_k$ and the observations $y_k$ can be modeled in this form:

* $x_0, x_1, \ldots$ is a first-order Markov process such that
*: $x_k|x_{k-1} \sim p_{x_k|x_{k-1}}(x|x_{k-1})$ with an initial distribution $p(x_0)$.
* The observations $y_0, y_1, \ldots$ are conditionally independent provided that $x_0, x_1, \ldots$ are known. In other words, each $y_k$ depends only on $x_k$:
*: $y_k|x_k \sim p_{y|x}(y|x_k)$

One example form of this scenario is

: $x_k = f(x_{k-1}) + v_k$
: $y_k = h(x_k) + w_k$

where both $v_k$ and $w_k$ are mutually independent and identically distributed sequences with known probability density functions, and $f(\cdot)$ and $h(\cdot)$ are known functions. These two equations can be viewed as state space equations and look similar to the state space equations for the Kalman filter. If the functions $f(\cdot)$ and $h(\cdot)$ were linear, and if both $v_k$ and $w_k$ were Gaussian, the Kalman filter would find the exact Bayesian filtering distribution. If not, Kalman filter based methods are a first-order approximation. Particle filters are also an approximation, but with enough particles they can be much more accurate.
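As an illustration, the state space form above can be simulated directly. The sketch below assumes a hypothetical scalar model with $f(x) = 0.5x + 25x/(1+x^2)$, $h(x) = x^2/20$, and Gaussian noise; these particular choices are illustrative and do not come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # hypothetical nonlinear transition function f(x_{k-1})
    return 0.5 * x + 25.0 * x / (1.0 + x**2)

def h(x):
    # hypothetical nonlinear observation function h(x_k)
    return x**2 / 20.0

def simulate(T, q=10.0, r=1.0):
    """Generate hidden states x_k and observations y_k for k = 0..T-1,
    with v_k ~ N(0, q) and w_k ~ N(0, r)."""
    xs, ys = [], []
    x = rng.normal(0.0, 1.0)                      # draw x_0 from the initial distribution
    for _ in range(T):
        x = f(x) + rng.normal(0.0, np.sqrt(q))    # x_k = f(x_{k-1}) + v_k
        y = h(x) + rng.normal(0.0, np.sqrt(r))    # y_k = h(x_k) + w_k
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

xs, ys = simulate(50)
```

Because $f$ and $h$ are nonlinear here, a Kalman filter could only approximate the resulting filtering distribution, which is the setting where particle filters are useful.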

## Monte Carlo approximation

Particle methods, like all sampling-based approaches (e.g., MCMC), generate a set of samples that approximate the filtering distribution $p(x_k|y_0,\dots,y_k)$. So, with $P$ samples, expectations with respect to the filtering distribution are approximated by

: $\int f(x_k)\, p(x_k|y_0,\dots,y_k)\, dx_k \approx \frac{1}{P}\sum_{L=1}^P f\left(x_k^{(L)}\right)$

and $f(\cdot)$, in the usual way for Monte Carlo, can give all the moments etc. of the distribution up to some degree of approximation.
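The approximation above can be sketched in a few lines: with equally weighted samples, any moment is just a sample average of $f(x_k^{(L)})$. Here the "particles" are stand-in draws from a known Gaussian, so the estimates can be checked against the true moments.

```python
import numpy as np

rng = np.random.default_rng(1)
P = 100_000
# stand-in for particles x_k^{(L)} drawn from a known N(2, 1) "filtering distribution"
samples = rng.normal(2.0, 1.0, size=P)

mean_est = np.mean(samples)           # f(x) = x   -> first moment
second_moment = np.mean(samples**2)   # f(x) = x^2 -> second moment
var_est = second_moment - mean_est**2 # derived variance estimate
```

With $P = 10^5$ samples, both estimates land within a few hundredths of the true values (mean 2, variance 1), illustrating the $O(1/\sqrt{P})$ Monte Carlo error.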

## Sampling Importance Resampling (SIR)

Sampling importance resampling (SIR) is a very commonly used particle filtering algorithm, which approximates the filtering distribution $p(x_k|y_0,\ldots,y_k)$ by a weighted set of $P$ particles

: $\left\{\left(w_k^{(L)}, x_k^{(L)}\right) : L=1,\ldots,P\right\}.$

The importance weights $w_k^{(L)}$ are approximations to the relative posterior probabilities (or densities) of the particles, such that $\sum_{L=1}^P w_k^{(L)} = 1$.

SIR is a sequential (i.e., recursive) version of importance sampling. As in importance sampling, the expectation of a function $f(\cdot)$ can be approximated as a weighted average

: $\int f(x_k)\, p(x_k|y_0,\dots,y_k)\, dx_k \approx \sum_{L=1}^P w_k^{(L)} f\left(x_k^{(L)}\right).$

For a finite set of particles, the algorithm's performance depends on the choice of the proposal distribution

: $\pi(x_k|x_{0:k-1}, y_{0:k}).$

The optimal proposal distribution is given as the target distribution

: $\pi(x_k|x_{0:k-1}, y_{0:k}) = p(x_k|x_{k-1}, y_k).$

However, the transition prior is often used as the importance function, since it is easier to draw particles (or samples) from it and to perform the subsequent importance-weight calculations:

: $\pi(x_k|x_{0:k-1}, y_{0:k}) = p(x_k|x_{k-1}).$

Sampling importance resampling filters with the transition prior as importance function are commonly known as the bootstrap filter and the condensation algorithm.

Resampling is used to avoid the degeneracy problem, that is, the situation in which all but one of the importance weights are close to zero. The performance of the algorithm can also be affected by the choice of resampling method. The stratified resampling proposed by Kitagawa (1996) is optimal in terms of variance.
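A minimal sketch of stratified resampling follows the usual construction: one uniform draw is taken inside each of the $P$ equal subintervals of $[0,1)$, and particles are selected by inverting the cumulative weight function. The function name is illustrative, not from a library.

```python
import numpy as np

def stratified_resample(weights, rng):
    """Return P particle indices drawn by stratified resampling:
    one point per stratum [L/P, (L+1)/P), mapped through the CDF of the weights."""
    P = len(weights)
    positions = (rng.uniform(size=P) + np.arange(P)) / P  # one uniform per stratum
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                                  # guard against floating-point round-off
    return np.searchsorted(cumulative, positions)         # indices of selected particles

rng = np.random.default_rng(2)
w = np.array([0.1, 0.2, 0.3, 0.4])
idx = stratified_resample(w, rng)  # 4 indices into the particle set
```

Compared with plain multinomial resampling, the stratified scheme guarantees that a particle with weight $w$ is copied between $\lfloor wP \rfloor$ and $\lceil wP \rceil + 1$ times, which is what reduces the resampling variance.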

A single step of sequential importance resampling is as follows:

:1) For $L=1,\ldots,P$, draw samples from the proposal distribution

:: $x_k^{(L)} \sim \pi(x_k|x_{0:k-1}^{(L)}, y_{0:k})$

:2) For $L=1,\ldots,P$, update the importance weights up to a normalizing constant:

:: $\hat{w}_k^{(L)} = w_{k-1}^{(L)} \frac{p(y_k|x_k^{(L)})\, p(x_k^{(L)}|x_{k-1}^{(L)})}{\pi(x_k^{(L)}|x_{0:k-1}^{(L)}, y_{0:k})}.$

:3) For $L=1,\ldots,P$, compute the normalized importance weights:

:: $w_k^{(L)} = \frac{\hat{w}_k^{(L)}}{\sum_{J=1}^P \hat{w}_k^{(J)}}$

:4) Compute an estimate of the effective number of particles as

:: $\hat{N}_\mathrm{eff} = \frac{1}{\sum_{L=1}^P \left(w_k^{(L)}\right)^2}$

:5) If the effective number of particles is less than a given threshold, $\hat{N}_\mathrm{eff} < N_\mathrm{thr}$, then perform resampling:

::a) Draw $P$ particles from the current particle set, with probabilities proportional to their weights, and replace the current particle set with this new one.

::b) For $L=1,\ldots,P$, set $w_k^{(L)} = 1/P.$
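The five steps above can be sketched as a single function. This is a bootstrap-flavored sketch: it assumes the transition prior is used as the proposal (so the step 2 update reduces to multiplication by the likelihood $p(y_k|x_k^{(L)})$), a hypothetical scalar Gaussian model, and plain multinomial resampling in step 5a; none of these model choices come from the text.

```python
import numpy as np

rng = np.random.default_rng(3)

def sir_step(particles, weights, y_k, f, h, q, r, N_thr):
    """One sequential importance resampling step for a scalar model
    x_k = f(x_{k-1}) + v_k, y_k = h(x_k) + w_k, with v ~ N(0,q), w ~ N(0,r)."""
    P = len(particles)
    # 1) draw from the proposal (here: the transition prior)
    particles = f(particles) + rng.normal(0.0, np.sqrt(q), size=P)
    # 2) unnormalized weight update: Gaussian likelihood p(y_k | x_k^{(L)})
    lik = np.exp(-0.5 * (y_k - h(particles))**2 / r) / np.sqrt(2 * np.pi * r)
    w_hat = weights * lik
    # 3) normalize the weights
    weights = w_hat / np.sum(w_hat)
    # 4) effective sample size
    n_eff = 1.0 / np.sum(weights**2)
    # 5) resample (multinomially here) if the degeneracy threshold is crossed
    if n_eff < N_thr:
        idx = rng.choice(P, size=P, p=weights)
        particles, weights = particles[idx], np.full(P, 1.0 / P)
    return particles, weights

P = 500
parts = rng.normal(0.0, 1.0, size=P)   # initial particles from p(x_0)
wts = np.full(P, 1.0 / P)
parts, wts = sir_step(parts, wts, y_k=0.3,
                      f=lambda x: 0.5 * x, h=lambda x: x,
                      q=1.0, r=1.0, N_thr=P / 2)
estimate = np.sum(wts * parts)         # weighted posterior-mean estimate of x_k
```

Calling `sir_step` once per incoming observation yields the full on-line filter; the final weighted sum is the expectation approximation given earlier in the section.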

The term "sequential importance resampling" is also sometimes used when referring to SIR filters.

## Sequential Importance Sampling (SIS)

Sequential importance sampling is the same as sampling importance resampling, but without the resampling stage.

## "Direct version" algorithm

The "direct version" algorithm is rather simple (compared to other particle filtering algorithms); it uses composition and rejection. To generate a single sample $x$ at time $k$ from $p_{x_k|y_{1:k}}(x|y_{1:k})$:

:1) Set $p = 1$

:2) Uniformly generate $L$ from $\{1,\ldots,P\}$

:3) Generate a test value $\hat{x}$ from the transition distribution $p_{x_k|x_{k-1}}\left(x|x_{k-1|k-1}^{(L)}\right)$

:4) Compute the probability of $\hat{y}$ given $\hat{x}$, namely $p_{y|x}(y_k|\hat{x})$, where $y_k$ is the measured value

:5) Generate another uniform $u$ from $[0, m_k]$, where $m_k$ is an upper bound on $p_{y|x}$

:6) Compare $u$ and $p_{y|x}(y_k|\hat{x})$

::6a) If $u$ is larger, repeat from step 2

::6b) If $u$ is smaller, save $\hat{x}$ as $x_{k|k}^{(p)}$ and increment $p$

:7) If $p > P$, then quit

The goal is to generate $P$ "particles" at time $k$ using only the particles from time $k-1$. This requires that a Markov equation can be written (and computed) to generate an $x_k$ based only upon $x_{k-1}$. The algorithm uses composition of the $P$ particles from $k-1$ to generate a particle at $k$ and repeats (steps 2–6) until $P$ particles have been generated at $k$.

This can be more easily visualized if $x$ is viewed as a two-dimensional array. One dimension is $k$ and the other is the particle number. For example, $x(k,L)$ would be the $L$th particle at time $k$, which can also be written $x_k^{(L)}$ (as done above in the algorithm). Step 3 generates a potential $x_k$ based on a randomly chosen particle $x_{k-1}^{(L)}$ at time $k-1$ and rejects or accepts it in step 6. In other words, the $x_k$ values are generated using the previously generated $x_{k-1}$ values.
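Steps 1–7 can be sketched as follows, under the assumption of a scalar Gaussian model, so that the likelihood bound $m_k$ is the Gaussian density maximum $1/\sqrt{2\pi r}$. All model choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def direct_step(prev_particles, y_k, f, h, q, r):
    """Build P particles at time k from the particles at k-1 by
    composition (transition draw) and rejection (observation likelihood)."""
    P = len(prev_particles)
    m_k = 1.0 / np.sqrt(2 * np.pi * r)     # upper bound of the Gaussian likelihood
    new_particles = np.empty(P)
    p = 0                                  # step 1: accepted-particle counter
    while p < P:                           # step 7: stop once P are accepted
        L = rng.integers(P)                # step 2: pick a random particle from k-1
        # step 3: draw a test value from the transition distribution
        x_hat = f(prev_particles[L]) + rng.normal(0.0, np.sqrt(q))
        # step 4: likelihood of the measured y_k given the test value
        lik = np.exp(-0.5 * (y_k - h(x_hat))**2 / r) / np.sqrt(2 * np.pi * r)
        u = rng.uniform(0.0, m_k)          # step 5
        if u <= lik:                       # step 6: accept when u is smaller
            new_particles[p] = x_hat
            p += 1
    return new_particles

prev = rng.normal(0.0, 1.0, size=200)
new = direct_step(prev, y_k=0.5,
                  f=lambda x: 0.5 * x, h=lambda x: x,
                  q=1.0, r=1.0)
```

The expected number of rejection-loop iterations grows as the observation becomes less compatible with the propagated particles, which is the practical cost of this otherwise simple scheme.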

## Other particle filters

* Auxiliary particle filter
* Gaussian particle filter
* Unscented particle filter
* Monte Carlo particle filter
* Gauss-Hermite particle filter
* Cost Reference particle filter
* Rao-Blackwellized particle filter

## See also

* Ensemble Kalman filter
* Recursive Bayesian estimation

## References

* Doucet, A.; De Freitas, N.; Gordon, N.J. (2001). *Sequential Monte Carlo Methods in Practice*. Springer.
* Cappe, O.; Moulines, E.; Ryden, T. (2005). *Inference in Hidden Markov Models*. Springer.
* Liu, J. (2001). *Monte Carlo Strategies in Scientific Computing*. Springer.
* Ristic, B.; Arulampalam, S.; Gordon, N. (2004). *Beyond the Kalman Filter: Particle Filters for Tracking Applications*. Artech House.
* Doucet, A.; Godsill, S.; Andrieu, C. (2000). "On Sequential Monte Carlo Methods for Bayesian Filtering". *Statistics and Computing* 10 (3): 197–208.
* Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. (2002). "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking". *IEEE Transactions on Signal Processing* 50 (2): 174–188. doi:10.1109/78.978374. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=978374
* Cappe, O.; Godsill, S.; Moulines, E. (2007). "An overview of existing methods and recent advances in sequential Monte Carlo". *Proceedings of the IEEE* 95 (5): 899. doi:10.1109/JPROC.2007.893250.
* Kotecha, J.H.; Djuric, P. (2003). "Gaussian Particle Filtering". *IEEE Transactions on Signal Processing* 51 (10).
* Haug, A.J. (2005). "A Tutorial on Bayesian Estimation and Tracking Techniques Applicable to Nonlinear and Non-Gaussian Processes". The MITRE Corporation, Tech. Rep., Feb. http://www.mitre-corporation.net/work/tech_papers/tech_papers_05/05_0211/05_0211.pdf (accessed 2008-05-06)
* Pitt, M.K.; Shephard, N. (1999). "Filtering Via Simulation: Auxiliary Particle Filters". *Journal of the American Statistical Association* 94 (446): 590–591. doi:10.2307/2670179. http://www.questia.com/PM.qst?a=o&se=gglsc&d=5002321997 (accessed 2008-05-06)

## External links

* [http://www-sigproc.eng.cam.ac.uk/smc/ Sequential Monte Carlo Methods (Particle Filtering)] homepage at the University of Cambridge
* [http://www.cs.washington.edu/ai/Mobile_Robotics/mcl/ Dieter Fox's MCL Animations]
* [http://web.engr.oregonstate.edu/~hess/ Rob Hess' free software]
* [http://babel.isa.uma.es/mrpt/index.php/Particle_Filters Tutorial on particle filtering] with the MRPT C++ library, and a [http://babel.isa.uma.es/mrpt/index.php/Paper:An_Optimal_Filtering_Algorithm_for_Non-Parametric_Observation_Models_in_Robot_Localization_(ICRA_2008)#Video mobile robot localization video].

