Particle filter

Particle filters, also known as sequential Monte Carlo (SMC) methods, are sophisticated model estimation techniques based on simulation.

They are usually used to estimate Bayesian models and are the sequential ('on-line') analogue of Markov chain Monte Carlo (MCMC) batch methods; they are also closely related to importance sampling methods. If well designed, particle filters can be much faster than MCMC. They are often an alternative to the Extended Kalman filter (EKF) or Unscented Kalman filter (UKF), with the advantage that, with sufficient samples, they approach the Bayesian optimal estimate and so can be made more accurate than either the EKF or UKF. The approaches can also be combined by using a version of the Kalman filter as a proposal distribution for the particle filter.

Goal

The particle filter aims to estimate the sequence of hidden parameters x_k for k=0,1,2,\ldots, based only on the observed data y_k for k=0,1,2,\ldots. All Bayesian estimates of x_k follow from the posterior distribution p(x_k|y_0,y_1,\ldots,y_k). In contrast, the MCMC or importance sampling approach would model the full posterior p(x_0,x_1,\ldots,x_k|y_0,y_1,\ldots,y_k).

Model

Particle methods assume x_k and the observations y_k can be modeled in this form:

*x_0, x_1, \ldots is a first-order Markov process such that
*: x_k|x_{k-1} \sim p_{x_k|x_{k-1}}(x|x_{k-1}), with an initial distribution p(x_0).
*The observations y_0, y_1, \ldots are conditionally independent provided that x_0, x_1, \ldots are known; in other words, each y_k depends only on x_k:
*: y_k|x_k \sim p_{y|x}(y|x_k)
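
Together, these two assumptions mean the joint density of states and observations factorizes into the initial, transition, and observation densities:

: p(x_0,\ldots,x_k,y_0,\ldots,y_k) = p(x_0)\,p(y_0|x_0)\prod_{i=1}^{k} p(x_i|x_{i-1})\,p(y_i|x_i)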

One example form of this scenario is

: x_k = f(x_{k-1}) + v_k
: y_k = h(x_k) + w_k

where both v_k and w_k are mutually independent and identically distributed sequences with known probability density functions, and f(\cdot) and h(\cdot) are known functions. These two equations can be viewed as state-space equations and look similar to the state-space equations for the Kalman filter. If the functions f(\cdot) and h(\cdot) were linear, and if both v_k and w_k were Gaussian, the Kalman filter would find the exact Bayesian filtering distribution. If not, Kalman-filter-based methods are a first-order approximation. Particle filters are also an approximation, but with enough particles they can be much more accurate.
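
As a concrete illustration, here is a minimal Python sketch of such a model; the particular f, h, and noise scales below are illustrative choices, not part of any standard:

```python
import numpy as np

rng = np.random.default_rng(0)

# Example nonlinear state-space model of the form
#   x_k = f(x_{k-1}) + v_k,   y_k = h(x_k) + w_k
# with Gaussian v_k and w_k (illustrative choices).
def f(x):
    return 0.5 * x + 25.0 * x / (1.0 + x ** 2)

def h(x):
    return x ** 2 / 20.0

def simulate(T, q=1.0, r=1.0):
    xs, ys = np.empty(T), np.empty(T)
    x = rng.normal()                        # x_0 ~ p(x_0)
    for k in range(T):
        x = f(x) + rng.normal(scale=q)      # transition plus v_k
        xs[k] = x
        ys[k] = h(x) + rng.normal(scale=r)  # observation plus w_k
    return xs, ys
```

Since f is nonlinear and h is quadratic here, a Kalman-filter-based method could only approximate the filtering distribution for this model, while a particle filter can approximate it arbitrarily well as the number of particles grows.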

Monte Carlo approximation

Particle methods, like all sampling-based approaches (e.g., MCMC), generate a set of samples that approximate the filtering distribution p(x_k|y_0,\dots,y_k). So, with P samples, expectations with respect to the filtering distribution are approximated by

: \int f(x_k)\,p(x_k|y_0,\dots,y_k)\,dx_k \approx \frac{1}{P}\sum_{L=1}^P f(x_k^{(L)})

where, in the usual way for Monte Carlo, different choices of f(\cdot) can give all the moments etc. of the distribution, up to some degree of approximation.
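
In code, this approximation is just a sample mean; a minimal Python sketch, assuming f is vectorized over an array of samples:

```python
import numpy as np

def mc_expectation(f, samples):
    # Plain Monte Carlo estimate of E[f(x_k)] from P samples drawn
    # (approximately) from the filtering distribution.
    return np.mean(f(samples))

# e.g., posterior mean and second moment of x_k:
# mc_expectation(lambda x: x, samples), mc_expectation(np.square, samples)
```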

Sampling Importance Resampling (SIR)

"Sampling importance resampling (SIR)" is a very commonly usedparticle filtering algorithm, which approximates the filteringdistribution p(x_k|y_0,ldots,y_k) by a weighted setof P particles

: \{(w^{(L)}_k, x^{(L)}_k) : L=1,\ldots,P\}.

The "importance weights" w^{(L)}_k are approximations tothe relative posterior probabilities (or densities) of the particlessuch that sum_{L=1}^P w^{(L)}_k = 1.

SIR is a sequential (i.e., recursive) version of importance sampling. As in importance sampling, the expectation of a function f(\cdot) can be approximated as a weighted average

: \int f(x_k)\,p(x_k|y_0,\dots,y_k)\,dx_k \approx \sum_{L=1}^P w^{(L)}_k f(x_k^{(L)}).
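
A minimal Python sketch of this weighted average; with uniform weights w^{(L)}_k = 1/P it reduces to the plain Monte Carlo estimate of the previous section:

```python
import numpy as np

def sir_expectation(f, particles, weights):
    # Weighted average sum_L w[L] * f(x[L]) over the particle set.
    return np.sum(weights * f(particles))
```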

For a finite set of particles, the algorithm performance is dependent on the choice of the "proposal distribution"

: \pi(x_k|x_{0:k-1},y_{0:k}).

The "optimal proposal distribution" is given as the "target distribution": pi(x_k|x_{0:k-1},y_{0:k}) = p(x_k|x_{k-1},y_{k}). ,

However, the transition prior is often used as the importance function, since it is easier to draw particles (or samples) from and to perform the subsequent importance weight calculations:

: \pi(x_k|x_{0:k-1},y_{0:k}) = p(x_k|x_{k-1}).

Sampling importance resampling (SIR) filters with the transition prior as importance function are commonly known as the bootstrap filter and the condensation algorithm.
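
With this choice of proposal, the weight update in the algorithm below simplifies, because the transition density cancels against the proposal and only the likelihood remains:

: \hat{w}^{(L)}_k = w^{(L)}_{k-1}\,\frac{p(y_k|x^{(L)}_k)\,p(x^{(L)}_k|x^{(L)}_{k-1})}{p(x^{(L)}_k|x^{(L)}_{k-1})} = w^{(L)}_{k-1}\,p(y_k|x^{(L)}_k).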

"Resampling" is used to avoid the problem of degeneracy of thealgorithm, that is, avoiding the situation that all but one of theimportance weights are close to zero. The performance of the algorithmcan be also affected by proper choice of resampling method. The"stratified resampling" proposed by Kitagawa (1996) is optimal interms of variance.

A single step of sequential importance resampling is as follows:

:1) For L=1,\ldots,P draw samples from the "proposal distribution"

:: x^{(L)}_k \sim \pi(x_k|x^{(L)}_{0:k-1},y_{0:k})

:2) For L=1,\ldots,P update the importance weights up to a normalizing constant:

:: \hat{w}^{(L)}_k = w^{(L)}_{k-1}\,\frac{p(y_k|x^{(L)}_k)\,p(x^{(L)}_k|x^{(L)}_{k-1})}{\pi(x_k^{(L)}|x^{(L)}_{0:k-1},y_{0:k})}.

:3) For L=1,\ldots,P compute the normalized importance weights:

:: w^{(L)}_k = \frac{\hat{w}^{(L)}_k}{\sum_{J=1}^P \hat{w}^{(J)}_k}

:4) Compute an estimate of the effective number of particles as

:: \hat{N}_\mathit{eff} = \frac{1}{\sum_{L=1}^P \left(w^{(L)}_k\right)^2}

:5) If the effective number of particles is less than a given threshold, \hat{N}_\mathit{eff} < N_\mathit{thr}, then perform resampling:

::a) Draw P particles from the current particle set with probabilities proportional to their weights. Replace the current particle set with this new one.

::b) For L=1,\ldots,P set w^{(L)}_k = 1/P.
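
A minimal Python sketch of one such step, using the transition prior as the proposal (the bootstrap filter described above); `propagate` and `likelihood` are placeholder names for the model-specific densities, and the multinomial resampling in step 5 could be replaced by the stratified scheme sketched earlier:

```python
import numpy as np

def sir_step(particles, weights, propagate, likelihood, y_k, n_thr,
             rng=np.random.default_rng()):
    # propagate(particles): sample x_k ~ p(x_k | x_{k-1}) per particle
    # likelihood(y_k, particles): evaluate p(y_k | x_k) per particle
    P = len(particles)
    # 1) draw from the proposal (here, the transition prior)
    particles = propagate(particles)
    # 2)-3) update weights; with this proposal the transition density
    # cancels and only the likelihood remains, then normalize
    w_hat = weights * likelihood(y_k, particles)
    weights = w_hat / np.sum(w_hat)
    # 4) effective number of particles
    n_eff = 1.0 / np.sum(weights ** 2)
    # 5) resample (multinomial here, for brevity) if below threshold
    if n_eff < n_thr:
        idx = rng.choice(P, size=P, p=weights)
        particles, weights = particles[idx], np.full(P, 1.0 / P)
    return particles, weights
```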

The term " Sequential Importance Resampling" is also sometimes used when referring to SIR filters.

Sequential Importance Sampling (SIS)

* Sequential importance sampling (SIS) is the same as sampling importance resampling (SIR), but without the resampling stage.

"Direct version" algorithm

The "direct version" algorithm is rather simple (compared to other particle filtering algorithms) and it uses composition and rejection.To generate a single sample x at k from p_{x_k|y_{1:k(x|y_{1:k}):

:1) Set p=1

:2) Uniformly generate L from \{1,\ldots,P\}

:3) Generate a test \hat{x} from its distribution p_{x_k|x_{k-1}}(x|x_{k-1|k-1}^{(L)})

:4) Generate the probability of \hat{y} using \hat{x} from p_{y|x}(y_k|\hat{x}), where y_k is the measured value

:5) Generate another uniform u from [0, m_k], where m_k is an upper bound on p_{y|x}(y_k|x), the usual rejection sampling bound

:6) Compare u and \hat{y}

::6a) If u is larger, then repeat from step 2

::6b) If u is smaller, then save \hat{x} as x_{k|k}^{(p)} and increment p

:7) If p > P, then quit

The goal is to generate P "particles" at k using only the particles from k-1. This requires that a Markov equation can be written (and computed) to generate an x_k based only upon x_{k-1}. This algorithm uses composition of the P particles from k-1 to generate a particle at k and repeats (steps 2-6) until P particles are generated at k.

This can be more easily visualized if x is viewed as a two-dimensional array. One dimension is k and the other dimension is the particle number. For example, x(k,L) would be the Lth particle at k and can also be written x_k^{(L)} (as done above in the algorithm). Step 3 generates a "potential" x_k based on a randomly chosen particle (x_{k-1}^{(L)}) at time k-1 and rejects or accepts it in step 6. In other words, the x_k values are generated using the previously generated x_{k-1}.
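
A minimal Python sketch of this composition-and-rejection loop, under the same placeholder conventions as the earlier sketches (`propagate_one` samples p(x_k|x_{k-1}) for a single particle, `likelihood_one` evaluates p(y_k|x), and m_k is assumed to bound the likelihood):

```python
import numpy as np

def direct_step(prev_particles, y_k, propagate_one, likelihood_one, m_k,
                rng=np.random.default_rng()):
    # Generate P particles at time k from the P particles at k-1 by
    # composition (pick a random parent) and rejection (accept a
    # proposal with probability p(y_k | x_hat) / m_k).
    P = len(prev_particles)
    new_particles = np.empty(P)
    p = 0
    while p < P:
        L = rng.integers(P)                        # step 2: random parent
        x_hat = propagate_one(prev_particles[L])   # step 3: propose
        y_hat = likelihood_one(y_k, x_hat)         # step 4: likelihood
        u = rng.uniform(0.0, m_k)                  # step 5
        if u <= y_hat:                             # step 6: accept
            new_particles[p] = x_hat
            p += 1
    return new_particles
```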

Other Particle Filters

* Auxiliary particle filter
* Gaussian particle filter
* Unscented particle filter
* Monte Carlo particle filter
* Gauss-Hermite particle filter
* Cost Reference particle filter
* Rao-Blackwellized particle filter

See also

* Ensemble Kalman filter
* Recursive Bayesian estimation

References

* Doucet, A.; De Freitas, N.; Gordon, N.J. (2001). Sequential Monte Carlo Methods in Practice. Springer.
* Cappe, O.; Moulines, E.; Ryden, T. (2005). Inference in Hidden Markov Models. Springer.
* Liu, J. (2001). Monte Carlo Strategies in Scientific Computing. Springer.
* Ristic, B.; Arulampalam, S.; Gordon, N. (2004). Beyond the Kalman Filter: Particle Filters for Tracking Applications. Artech House.
* Doucet, A.; Godsill, S.; Andrieu, C. (2000). "On Sequential Monte Carlo Methods for Bayesian Filtering". Statistics and Computing 10 (3): 197–208. http://www.springerlink.com/content/q6452k2x37357l3r/
* Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. (2002). "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking". IEEE Transactions on Signal Processing 50 (2): 174–188. doi:10.1109/78.978374. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=978374
* Cappe, O.; Godsill, S.; Moulines, E. (2007). "An overview of existing methods and recent advances in sequential Monte Carlo". Proceedings of the IEEE 95 (5): 899. doi:10.1109/JPROC.2007.893250.
* Kotecha, J.H.; Djuric, P. (2003). "Gaussian Particle Filtering". IEEE Transactions on Signal Processing 51 (10).
* Haug, A.J. (2005). "A Tutorial on Bayesian Estimation and Tracking Techniques Applicable to Nonlinear and Non-Gaussian Processes". The MITRE Corporation, Tech. Rep. http://www.mitre-corporation.net/work/tech_papers/tech_papers_05/05_0211/05_0211.pdf (accessed 2008-05-06)
* Pitt, M.K.; Shephard, N. (1999). "Filtering Via Simulation: Auxiliary Particle Filters". Journal of the American Statistical Association 94 (446): 590–591. doi:10.2307/2670179. http://www.questia.com/PM.qst?a=o&se=gglsc&d=5002321997 (accessed 2008-05-06)

External links

* [http://www-sigproc.eng.cam.ac.uk/smc/ Sequential Monte Carlo Methods (Particle Filtering)] homepage at the University of Cambridge
* [http://www.cs.washington.edu/ai/Mobile_Robotics/mcl/ Dieter Fox's MCL Animations]
* [http://web.engr.oregonstate.edu/~hess/ Rob Hess' free software]
* [http://babel.isa.uma.es/mrpt/index.php/Particle_Filters Tutorial on particle filtering] with the MRPT C++ library, and a [http://babel.isa.uma.es/mrpt/index.php/Paper:An_Optimal_Filtering_Algorithm_for_Non-Parametric_Observation_Models_in_Robot_Localization_(ICRA_2008)#Video mobile robot localization video].

