Stein's unbiased risk estimate

In statistics, Stein's unbiased risk estimate (SURE) is an unbiased estimator of the mean-squared error of a given estimator, in a deterministic estimation scenario. In other words, it provides an indication of the accuracy of a given estimator. This is important since, in deterministic estimation, the true mean-squared error of an estimator generally depends on the value of the unknown parameter, and thus cannot be determined completely.

The technique is named after its discoverer, Charles Stein. [Stein, Charles M. (Nov. 1981). "Estimation of the Mean of a Multivariate Normal Distribution". The Annals of Statistics 9 (6): 1135–1151. doi:10.1214/aos/1176345632]

Formal statement

Let $\theta \in \mathbb{R}^n$ be an unknown deterministic parameter and let $x$ be a measurement vector which is distributed normally with mean $\theta$ and covariance $\sigma^2 I$. Suppose $h(x)$ is an estimator of $\theta$ from $x$. Then Stein's unbiased risk estimate is given by

$$\mathrm{SURE}(h) = \|\theta\|^2 + \|h(x)\|^2 + 2\sigma^2 \sum_{i=1}^n \frac{\partial h_i}{\partial x_i} - 2 \sum_{i=1}^n x_i h_i(x),$$

where $h_i(x)$ is the $i$th component of the estimate and $\|\cdot\|$ is the Euclidean norm.

The importance of SURE is that it is an unbiased estimate of the mean-squared error (or squared error risk) of $h(x)$, i.e.,

$$E\{\mathrm{SURE}(h)\} = \mathrm{MSE}(h).$$
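The unbiasedness follows from Stein's lemma, which trades the unknown $\theta$ for the observable divergence of the estimator. A short derivation, consistent with the SURE expression above:

```latex
% Stein's lemma: for x ~ N(theta, sigma^2 I) and weakly differentiable h,
%   E[(x_i - \theta_i) h_i(x)] = \sigma^2 \, E[\partial h_i / \partial x_i].
\begin{align*}
\mathrm{MSE}(h) &= E\|h(x) - \theta\|^2
  = E\|h(x)\|^2 - 2\sum_{i=1}^n E[\theta_i h_i(x)] + \|\theta\|^2 \\
&= E\|h(x)\|^2
  - 2\sum_{i=1}^n \left( E[x_i h_i(x)] - \sigma^2 E\!\left[\frac{\partial h_i}{\partial x_i}\right] \right)
  + \|\theta\|^2 \\
&= E\!\left[ \|\theta\|^2 + \|h(x)\|^2
  + 2\sigma^2 \sum_{i=1}^n \frac{\partial h_i}{\partial x_i}
  - 2\sum_{i=1}^n x_i h_i(x) \right]
  = E\{\mathrm{SURE}(h)\}.
\end{align*}
```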

Thus, minimizing SURE can be expected to minimize the MSE. Except for the first term in SURE, $\|\theta\|^2$, which is identical for all estimators, there is no dependence on the unknown parameter $\theta$ in the expression above. Thus, SURE can be manipulated (e.g., to determine optimal estimation settings) without knowledge of $\theta$.
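As a concrete numerical check, the following sketch uses a linear shrinkage estimator $h(x) = a\,x$ (an illustrative choice; its divergence term is simply $n a$) and confirms by Monte Carlo that SURE averages to the true MSE:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, a = 20, 1.0, 0.7     # illustrative dimension, noise level, shrinkage
theta = rng.normal(size=n)     # fixed "unknown" parameter, known here only for the check

# For h(x) = a*x the divergence term sum_i dh_i/dx_i equals n*a, so
#   SURE = ||theta||^2 + a^2 ||x||^2 + 2 sigma^2 n a - 2 a ||x||^2
#        = ||theta||^2 + (a^2 - 2a) ||x||^2 + 2 sigma^2 n a.
xs = theta + sigma * rng.normal(size=(200_000, n))   # many measurement draws
sq = (xs ** 2).sum(axis=1)                           # ||x||^2 per draw
sure_vals = theta @ theta + (a**2 - 2 * a) * sq + 2 * sigma**2 * n * a
sure_avg = sure_vals.mean()

# Closed-form MSE of a*x: E||a*x - theta||^2 = (a-1)^2 ||theta||^2 + a^2 n sigma^2.
mse = (a - 1) ** 2 * (theta @ theta) + a**2 * n * sigma**2
```

The sample mean of SURE across draws matches the analytic MSE, as the unbiasedness result requires.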

Applications

A standard application of SURE is to choose a parametric form for an estimator and then optimize the parameter values to minimize the risk estimate. This technique has been applied in several settings. For example, a variant of the James–Stein estimator can be derived by finding the optimal shrinkage estimator. The technique has also been used by Donoho and Johnstone to determine the optimal shrinkage factor in a wavelet denoising setting. [Donoho, David L.; Johnstone, Iain M. (Dec. 1995). "Adapting to Unknown Smoothness via Wavelet Shrinkage". Journal of the American Statistical Association 90 (432): 1200–1244. doi:10.2307/2291512]
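A minimal sketch of this parameter-optimization idea, using the illustrative linear family $h(x) = a\,x$: dropping the constant $\|\theta\|^2$ term, SURE is a quadratic in $a$ and is minimized in closed form at $a^* = 1 - n\sigma^2/\|x\|^2$, a James–Stein-type shrinkage factor.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 50, 1.0
theta = 2.0 * rng.normal(size=n)        # hypothetical true parameter
x = theta + sigma * rng.normal(size=n)  # observed measurement

# For h(x) = a*x, dropping the constant ||theta||^2 term:
#   SURE(a) - ||theta||^2 = (a^2 - 2a) ||x||^2 + 2 sigma^2 n a
def sure_profile(a):
    return (a**2 - 2 * a) * (x @ x) + 2 * sigma**2 * n * a

# Setting the derivative to zero gives the SURE-optimal shrinkage factor.
a_star = 1 - n * sigma**2 / (x @ x)

# A grid search over a recovers the same minimizer.
grid = np.linspace(0.0, 1.0, 2001)
a_grid = grid[np.argmin(sure_profile(grid))]
```

Note that the optimization uses only the data $x$ and the known $\sigma^2$, never $\theta$, which is exactly what makes SURE usable in practice.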

