Bayes linear

Bayes linear is a subjectivist statistical methodology and framework. Traditional subjective Bayesian analysis is based upon fully specified probability distributions, which are very difficult to specify at the necessary level of detail. Bayes linear attempts to solve this problem by developing theory and practice for using partially specified probability models. Bayes linear in its current form has been primarily developed by Michael Goldstein. Mathematically and philosophically it extends Bruno de Finetti's subjective theory of probability.

As the probability model is only partially specified in Bayes linear, it is not possible to calculate conditional probabilities by Bayes' rule. Instead, Bayes linear suggests the calculation of an adjusted expectation.

To conduct a Bayes linear analysis it is necessary to identify some values that you expect to know shortly by making measurements, D, and some future values which you would like to know, B. Here D refers to a vector containing data and B to a vector containing quantities you would like to predict. For the following example B and D are taken to be two-dimensional vectors, i.e.

: B = (Y_1,Y_2),~ D = (X_1,X_2)

In order to specify a Bayes linear model it is necessary to supply expectations for the vectors B and D, and also to specify the covariance between each member of B and each member of D.

For example, the expectations are specified as:

: E(Y_1)=5,~E(Y_2)=3,~E(X_1)=5,~E(X_2)=3

and the covariance matrix is specified as:

: \begin{matrix} & X_1 & X_2 & Y_1 & Y_2 \\ X_1 & 1 & u & \gamma & \gamma \\ X_2 & u & 1 & \gamma & \gamma \\ Y_1 & \gamma & \gamma & 1 & v \\ Y_2 & \gamma & \gamma & v & 1 \end{matrix}.

The repetition in this matrix has some interesting implications, to be discussed shortly.
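
Concretely, this prior specification is nothing more than a mean vector and a covariance matrix; no distributional family is ever chosen. As a minimal sketch in Python with NumPy (the article leaves u, v and gamma free, so the numeric values below are purely illustrative):

```python
import numpy as np

# Illustrative values; u, v and gamma are left unspecified above.
u, v, gamma = 0.5, 0.5, 0.3

# Prior expectations, ordered (X_1, X_2, Y_1, Y_2).
mean = np.array([5.0, 3.0, 5.0, 3.0])

# Joint prior covariance matrix, exactly as specified above.
cov = np.array([
    [1.0,   u,     gamma, gamma],   # X_1
    [u,     1.0,   gamma, gamma],   # X_2
    [gamma, gamma, 1.0,   v    ],   # Y_1
    [gamma, gamma, v,     1.0  ],   # Y_2
])
```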

An adjusted expectation is a linear estimator of the form

: c_0 + c_1X_1 + c_2X_2

where c_0, c_1 and c_2 are chosen to minimise the prior expected loss in estimating the quantities of interest, here Y_1 and Y_2. That is, for Y_1 the coefficients c_0, c_1 and c_2 are chosen to minimise

: E\left(\left[Y_1 - c_0 - c_1X_1 - c_2X_2\right]^2\right).

The coefficients for estimating Y_2 are chosen in the same way.
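
Because the loss involves only first- and second-order prior moments, it can be evaluated and minimised without any distributional assumptions: for the residual Z = Y_1 - c_0 - c_1X_1 - c_2X_2 one has E(Z^2) = Var(Z) + E(Z)^2, and both terms follow from the specification above. A sketch of the minimisation, reusing the illustrative values from the previous block (scipy.optimize is one convenient choice; any quadratic minimiser would do):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative prior specification, as in the previous sketch.
u, v, gamma = 0.5, 0.5, 0.3
mean = np.array([5.0, 3.0, 5.0, 3.0])          # (X_1, X_2, Y_1, Y_2)
cov = np.array([
    [1.0,   u,     gamma, gamma],
    [u,     1.0,   gamma, gamma],
    [gamma, gamma, 1.0,   v    ],
    [gamma, gamma, v,     1.0  ],
])

def prior_expected_loss(c):
    """E([Y_1 - c_0 - c_1 X_1 - c_2 X_2]^2), computed from prior
    moments alone via E(Z^2) = Var(Z) + E(Z)^2."""
    c0, c1, c2 = c
    w = np.array([-c1, -c2, 1.0, 0.0])  # weights on (X_1, X_2, Y_1, Y_2)
    ez = w @ mean - c0                  # E(Z)
    vz = w @ cov @ w                    # Var(Z)
    return vz + ez**2

res = minimize(prior_expected_loss, x0=np.zeros(3))
print(res.x)  # approximately [3.4, 0.2, 0.2]
```

With these values the minimiser recovers c_1 = c_2 = gamma/(1+u) = 0.2 and c_0 = E(Y_1) - c_1E(X_1) - c_2E(X_2) = 3.4; the equality of c_1 and c_2 is a direct consequence of the repeated entries in the covariance matrix.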

In general, the adjusted expectation of a quantity X given observed data D_1,\dots,D_k is calculated as

: E_D(X) = \sum^k_{i=0} h_iD_i

where D_0 is taken to be the constant 1, and h_0,\dots,h_k are chosen to minimise

: E\left(\left[X - \sum^k_{i=0}h_iD_i\right]^2\right).

From a proof provided in Goldstein and Wooff (2007), it can be shown that:

: E_D(X) = E(X) + \mathrm{Cov}(X,D)\mathrm{Var}(D)^{-1}(D-E(D)).

For the case where Var(D) is not invertible, the Moore-Penrose pseudoinverse should be used instead.
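
This closed form translates directly into code. The sketch below reuses the illustrative values from the earlier blocks; np.linalg.pinv computes the Moore-Penrose pseudoinverse, so a singular Var(D) is handled as recommended:

```python
import numpy as np

# Illustrative prior specification, as above.
u, gamma = 0.5, 0.3
E_D = np.array([5.0, 3.0])                # E(X_1), E(X_2)
E_B = np.array([5.0, 3.0])                # E(Y_1), E(Y_2)
var_D  = np.array([[1.0, u], [u, 1.0]])   # Var(D)
cov_BD = np.full((2, 2), gamma)           # Cov(B, D)

def adjusted_expectation(d):
    """E_D(B) = E(B) + Cov(B,D) Var(D)^+ (d - E(D))."""
    return E_B + cov_BD @ np.linalg.pinv(var_D) @ (d - E_D)

# A made-up observation of D = (X_1, X_2).
print(adjusted_expectation(np.array([6.0, 4.0])))  # -> [5.4, 3.4]
```

For the observation D = (6, 4), each element of B is adjusted upwards by gamma/(1+u) times the total discrepancy between observed and expected data, matching the coefficients found by direct minimisation above.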

References

* Goldstein, Michael; Wooff, David (2007). Bayes Linear Statistics: Theory and Methods. Wiley.

