Multiple-try Metropolis

In Markov chain Monte Carlo, the Metropolis–Hastings algorithm (MH) can be used to sample from a probability distribution that is difficult to sample from directly. However, the MH algorithm requires the user to supply a proposal distribution, which can be relatively arbitrary. In many cases, one uses a Gaussian distribution centered on the current point in the probability space, of the form Q(x'; x^t)=\mathcal{N}(x^t, \sigma^2 I). This proposal distribution is convenient to sample from and may be the best choice if one has little knowledge about the target distribution \pi(x). If desired, one can use the more general multivariate normal distribution Q(x'; x^t)=\mathcal{N}(x^t, \mathbf{\Sigma}), where \mathbf{\Sigma} is a covariance matrix that the user believes reflects the scale and correlation structure of the target.
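
As a concrete reference point, here is a minimal random-walk Metropolis sampler in Python. This is a sketch only; the function name metropolis_hastings, the log-density argument log_target, and the example target are illustrative choices, not part of any particular library.

```python
import numpy as np

def metropolis_hastings(log_target, x0, sigma, n_steps, rng=None):
    """Random-walk Metropolis with Gaussian proposal Q(x'; x_t) = N(x_t, sigma^2 I)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    log_p = log_target(x)
    accepted = 0
    for t in range(n_steps):
        proposal = x + sigma * rng.standard_normal(x.size)
        log_p_new = log_target(proposal)
        # The proposal is symmetric, so the acceptance ratio reduces to pi(x')/pi(x).
        if np.log(rng.random()) < log_p_new - log_p:
            x, log_p = proposal, log_p_new
            accepted += 1
        samples[t] = x
    return samples, accepted / n_steps

# Example: sample a 2-dimensional standard normal target.
samples, rate = metropolis_hastings(
    lambda x: -0.5 * np.dot(x, x), x0=np.zeros(2), sigma=1.0, n_steps=10_000)
```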

Although this method must converge to the stationary distribution in the limit of infinite sample size, in practice the progress can be exceedingly slow. If \sigma^2 is too large, almost all steps under the MH algorithm will be rejected. On the other hand, if \sigma^2 is too small, almost all steps will be accepted, and the Markov chain will resemble a random walk through the probability space. In the simpler case of Q(x'; x^t)=\mathcal{N}(x^t, I), a walk of N steps only carries the chain a distance of order \sqrt{N}. In that case, the Markov chain will not fully explore the probability space in any reasonable amount of time. Thus the MH algorithm requires reasonable tuning of the scale parameter (\sigma^2 or \mathbf{\Sigma}).
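
The \sqrt{N} scaling of a random walk is easy to check empirically; the following snippet (a sketch, with step and trial counts chosen arbitrarily) compares the mean absolute displacement of a unit-step Gaussian random walk against \sqrt{N}.

```python
import numpy as np

rng = np.random.default_rng(0)
for n_steps in (100, 400, 1600):
    # 1000 independent 1-D random walks with N(0, 1) increments.
    displacement = rng.standard_normal((1000, n_steps)).sum(axis=1)
    # Mean |displacement| is sqrt(2 N / pi), i.e. proportional to sqrt(N).
    print(n_steps, np.abs(displacement).mean(), np.sqrt(n_steps))
```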

Problems with high dimensionality

Even if the scale parameter is well-tuned, as the dimensionality of the problem increases, progress can still remain exceedingly slow. To see this, again consider Q(x'; x^t)=\mathcal{N}(x^t, I). In one dimension, this corresponds to a Gaussian distribution with mean 0 and variance 1, so the mean step is zero; however, the mean squared step size is given by

\langle x^2 \rangle =\int_{-\infty}^{\infty}x^2\frac{1}{\sqrt{2 \pi}}e^{-\frac{x^2}{2}}\,dx=1

As the number of dimensions increases, the expected step size becomes larger and larger. In N dimensions, the radial distance moved in one step follows a chi distribution, with density

P_N(r) \propto r^{N-1}e^{-r^2/2}

This distribution is peaked at r=\sqrt{N-1}, which is \approx\sqrt{N} for large N. This means that the step size grows roughly as the square root of the number of dimensions. For the MH algorithm, such large steps will almost always land in regions of low probability, and will therefore be rejected.
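
A quick simulation confirms this growth with dimension (again a sketch; the dimensions and sample sizes below are arbitrary choices): the mean length of an \mathcal{N}(0, I) proposal in N dimensions tracks \sqrt{N}.

```python
import numpy as np

rng = np.random.default_rng(1)
for n_dims in (1, 10, 100, 1000):
    # The length of an N(0, I) step in n_dims dimensions is chi-distributed.
    r = np.linalg.norm(rng.standard_normal((10_000, n_dims)), axis=1)
    print(n_dims, r.mean(), np.sqrt(n_dims))
```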

If we now add the scale parameter \sigma^2 back in, we find that to retain a reasonable acceptance rate we must make the transformation \sigma^2 \rightarrow \sigma^2/N. In this situation, the acceptance rate can be made reasonable, but the exploration of the probability space becomes increasingly slow. To see this, consider a slice along any one dimension of the problem. Under the scale transformation above, the expected step size in any one dimension is not \sigma but \sigma/\sqrt{N}. As this step size is much smaller than the "true" scale of the probability distribution (assuming that \sigma is somehow known a priori, which is the best possible case), the algorithm executes a random walk along every parameter.

Multiple-try Metropolis

Liu et al. (2000) suggested a modification of the MH algorithm, called Multiple-try Metropolis (MTM), which allows larger step sizes while still retaining a reasonable acceptance rate.

Suppose Q(\mathbf{x},\mathbf{y}) is an arbitrary proposal function. We require that Q(\mathbf{x},\mathbf{y})>0 only if Q(\mathbf{y},\mathbf{x})>0. Additionally, let \pi(\mathbf{x}) be the target density, possibly known only up to a normalizing constant.

Define w(\mathbf{x},\mathbf{y})=\pi(\mathbf{x})Q(\mathbf{x},\mathbf{y})\lambda(\mathbf{x},\mathbf{y}) where \lambda(\mathbf{x},\mathbf{y}) is a non-negative symmetric function in \mathbf{x} and \mathbf{y} that can be chosen by the user.

Now suppose the current state is \mathbf{x}. The MTM algorithm is as follows:

1) Draw k independent trial proposals \mathbf{y}_1,\ldots,\mathbf{y}_k from Q(\mathbf{x},.). Compute the weights w(\mathbf{y}_j,\mathbf{x}) for each of these.

2) Select \mathbf{y} from among the trials \mathbf{y}_1,\ldots,\mathbf{y}_k with probability proportional to the weights w(\mathbf{y}_j,\mathbf{x}).

3) Now produce a reference set by drawing \mathbf{x}_1,\ldots,\mathbf{x}_{k-1} from the distribution Q(\mathbf{y},.). Set \mathbf{x}_k=\mathbf{x} (the current point).

4) Accept \mathbf{y} with probability

r=\min \left(1, \frac{ w(\mathbf{y}_1,\mathbf{x} )+ \cdots + w(\mathbf{y}_k,\mathbf{x}) }{ w(\mathbf{x}_1,\mathbf{y})+ \cdots + w(\mathbf{x}_k,\mathbf{y}) } \right)

It can be shown that this method satisfies the detailed balance property and therefore produces a reversible Markov chain with \pi(\mathbf{x}) as the stationary distribution.

If Q(\mathbf{x},\mathbf{y}) is symmetric (as is the case for the multivariate normal proposal), then one can choose \lambda(\mathbf{x},\mathbf{y})=\frac{1}{Q(\mathbf{x},\mathbf{y})}, which gives w(\mathbf{x},\mathbf{y})=\pi(\mathbf{x}): the weights reduce to the target density evaluated at each point.
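
Putting steps 1–4 together, here is a sketch of MTM in Python, using a symmetric Gaussian proposal and the choice \lambda=1/Q above, so each weight is just the (unnormalized) target density. The function and parameter names are illustrative, not taken from Liu et al. (2000).

```python
import numpy as np

def multiple_try_metropolis(target, x0, sigma, k, n_steps, rng=None):
    """Multiple-try Metropolis with a symmetric Gaussian proposal and
    lambda(x, y) = 1/Q(x, y), so each weight w(y, x) reduces to pi(y)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for t in range(n_steps):
        # 1) Draw k trial proposals from Q(x, .) and weight each by pi.
        trials = x + sigma * rng.standard_normal((k, x.size))
        w_trials = np.array([target(y) for y in trials])
        # 2) Select y among the trials with probability proportional to the weights.
        y = trials[rng.choice(k, p=w_trials / w_trials.sum())]
        # 3) Reference set: k - 1 draws from Q(y, .) plus the current point x.
        refs = y + sigma * rng.standard_normal((k - 1, x.size))
        w_refs = sum(target(z) for z in refs) + target(x)
        # 4) Accept y with probability min(1, sum_j w(y_j, x) / sum_j w(x_j, y)).
        if rng.random() < min(1.0, w_trials.sum() / w_refs):
            x = y
        samples[t] = x
    return samples

# Example: sample a 2-dimensional standard normal target (unnormalized density).
samples = multiple_try_metropolis(
    lambda x: np.exp(-0.5 * np.dot(x, x)),
    x0=np.zeros(2), sigma=2.0, k=5, n_steps=5_000)
```

With k = 1 the reference set is just the current point and the scheme reduces to ordinary Metropolis–Hastings; larger k lets one use a bigger \sigma while keeping the acceptance rate reasonable, at the cost of k extra density evaluations per step.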

References

  • Liu, J. S., Liang, F., and Wong, W. H. (2000). The use of multiple-try method and local optimization in Metropolis sampling. Journal of the American Statistical Association, 95(449): 121–134.
