 Measurement uncertainty

In metrology, measurement uncertainty is a nonnegative parameter characterizing the dispersion of the values attributed to a measured quantity. The uncertainty has a probabilistic basis and reflects incomplete knowledge of the quantity. All measurements are subject to uncertainty and a measured value is only complete if it is accompanied by a statement of the associated uncertainty. Fractional uncertainty is the measurement uncertainty divided by the measured value.
The Codex Alimentarius Commission has guidelines on measurement uncertainty, CAC/GL 54-2004.
Background
The purpose of measurement is to provide information about a quantity of interest, a measurand. For example, the measurand might be the volume of a vessel, the potential difference between the terminals of a battery, or the mass concentration of lead in a flask of water.
No measurement is exact. When a quantity is measured, the outcome depends on the measuring system, the measurement procedure, the skill of the operator, the environment, and other effects.^{[1]} Even if the quantity were to be measured several times, in the same way and in the same circumstances, a different measured value would in general be obtained each time, assuming that the measuring system has sufficient resolution to distinguish between the values.
The dispersion of the measured values would relate to how well the measurement is made. Their average would provide an estimate of the true value of the quantity that generally would be more reliable than an individual measured value. The dispersion and the number of measured values would provide information relating to the average value as an estimate of the true value. However, this information would not generally be adequate.
The measuring system may provide measured values that are not dispersed about the true value, but about some value offset from it. Take a domestic bathroom scale. Suppose it is not set to show zero when there is nobody on the scale, but to show some value offset from zero. Then, no matter how many times the person's mass were remeasured, the effect of this offset would be inherently present in the average of the values.
Random uncertainties and systematic errors
Measurements are subject to two types of imperfection: systematic error and random uncertainty.
A systematic error (an estimate of which is known as a measurement bias) is associated with the fact that a measured value contains an offset. In general, a systematic error, regarded as a quantity, is a component of error that remains constant or depends in a specific manner on some other quantity.
A random uncertainty is associated with the fact that when a measurement is repeated it will generally provide a measured value that is different from the previous value. It is random in that the next measured value cannot be predicted exactly from previous such values. (If a prediction were possible, allowance for the effect could be made.)
In general, there can be a number of contributions to each type of error.
GUM approach
The Guide to the Expression of Uncertainty in Measurement (GUM)^{[2]} is a document published by the JCGM that establishes general rules for evaluating and expressing uncertainty in measurement.^{[3]}
The GUM provides a way to express the perceived quality of the result of a measurement. Rather than express the result by providing a best estimate of the measurand along with information about systematic and random error values (in the form of an "error analysis"), the GUM approach is to express the result of a measurement as a best estimate of the measurand along with an associated measurement uncertainty.
One of the basic premises of the GUM approach is that it is possible to characterize the quality of a measurement by accounting for both systematic and random errors on a comparable footing, and a method is provided for doing that. This method refines the information previously provided in an "error analysis", and puts it on a probabilistic basis through the concept of measurement uncertainty.
Another basic premise of the GUM approach is that it is not possible to state how well the true value of the measurand is known, but only how well it is believed to be known. Measurement uncertainty can therefore be described as a measure of how well one believes one knows the true value of the measurand. This uncertainty reflects the incomplete knowledge of the measurand.
The notion of "belief" is an important one, since it moves metrology into a realm where results of measurement need to be considered and quantified in terms of probabilities that express degrees of belief.
Measurement model
The above discussion concerns the direct measurement of a quantity, which incidentally occurs rarely. For example, the bathroom scale may convert a measured extension of a spring into an estimate of the measurand, the mass of the person on the scale. The particular relationship between extension and mass is determined by the calibration of the scale. A measurement model converts a quantity value into the corresponding value of the measurand.
There are many types of measurement in practice and therefore many models. A simple measurement model (for example for a scale, where the mass is proportional to the extension of the spring) might be sufficient for everyday domestic use. Alternatively, a more sophisticated model of a weighing, involving additional effects such as air buoyancy, is capable of delivering better results for industrial or scientific purposes. In general there are often several different quantities, for example temperature, humidity and displacement, that contribute to the definition of the measurand, and that need to be measured.
Correction terms should be included in the measurement model when the conditions of measurement are not exactly as stipulated. These terms correspond to systematic errors. Given an estimate of a correction term, the relevant quantity should be corrected by this estimate. There will be an uncertainty associated with the estimate, even if the estimate is zero, as is often the case. Instances of systematic errors arise in height measurement, when the alignment of the measuring instrument is not perfectly vertical, and the ambient temperature is different from that prescribed. Neither the alignment of the instrument nor the ambient temperature is specified exactly, but information concerning these effects is available, for example the lack of alignment is at most 0.001° and the ambient temperature at the time of measurement differs from that stipulated by at most 2 °C.
As well as raw data representing measured values, there is another form of data that is frequently needed in a measurement model. Some such data relate to quantities representing physical constants, each of which is known imperfectly. Examples are material constants such as modulus of elasticity and specific heat. There are often other relevant data given in reference books, calibration certificates, etc., regarded as estimates of further quantities.
The quantities required by a measurement model to define the measurand are known as input quantities; the measurand itself is the output quantity. The model is often referred to as a functional relationship.
Formally, the output quantity, denoted by Y, about which information is required, is often related to input quantities, denoted by X_{1}, ... , X_{N}, about which information is available, by a measurement model in the form of a measurement function
 Y = f(X_{1}, ... , X_{N}).
A general expression for a measurement model is
 h(Y, X_{1}, ... , X_{N}) = 0.
It is taken that a procedure exists for calculating Y given X_{1}, ... , X_{N}, and that Y is uniquely defined by this equation.
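The two forms of model can be sketched in code. The example below is an assumed illustration (a spring scale obeying Hooke's law, with an assumed value for the local gravitational acceleration), not a prescribed model: the same relationship is written once as a measurement function Y = f(X_{1}, X_{2}) and once in the implicit form h(Y, X_{1}, X_{2}) = 0.

```python
G = 9.81  # local gravitational acceleration in m/s^2 (assumed value)

def mass_from_extension(extension_m, spring_constant_n_per_m):
    """Measurement function Y = f(X1, X2): convert spring extension to mass.

    X1 is the measured extension (m), X2 the spring constant (N/m);
    Hooke's law gives m = k * x / g.
    """
    return spring_constant_n_per_m * extension_m / G

def h(y, extension_m, spring_constant_n_per_m):
    """The same model in the general implicit form h(Y, X1, X2) = 0."""
    return y * G - spring_constant_n_per_m * extension_m

# A value of Y computed from the measurement function satisfies the
# implicit equation by construction.
y = mass_from_extension(0.01, 5000.0)   # 0.01 m extension, k = 5000 N/m
assert abs(h(y, 0.01, 5000.0)) < 1e-9
```

The explicit form is convenient when Y can be written directly in terms of the inputs; the implicit form covers models where Y must be obtained by solving an equation numerically.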
Propagation of distributions
The true values of the input quantities X_{1}, ... , X_{N} are unknown. In the GUM approach, X_{1}, ... , X_{N} are characterized by probability distributions and treated mathematically as random variables. These distributions describe the respective probabilities of their true values lying in different intervals, and are assigned based on available knowledge concerning X_{1}, ... , X_{N}. Sometimes, some or all of X_{1}, ... , X_{N} are interrelated and the relevant distributions, which are known as joint, apply to these quantities taken together.
Consider estimates x_{1}, ... , x_{N}, respectively, of the input quantities X_{1}, ... , X_{N}, obtained from certificates and reports, manufacturers' specifications, the analysis of measurement data, and so on. The probability distributions characterizing X_{1}, ... , X_{N} are chosen such that the estimates x_{1}, ... , x_{N}, respectively, are the expectations^{[4]} of X_{1}, ... , X_{N}. Moreover, for the i-th input quantity, consider a so-called standard uncertainty, given the symbol u(x_{i}), defined as the standard deviation^{[4]} of the input quantity X_{i}. This standard uncertainty is said to be associated with the (corresponding) estimate x_{i}. The estimate x_{i} is best in the sense that u^{2}(x_{i}) is smaller than the expected squared difference of X_{i} from any other value.
The use of available knowledge to establish a probability distribution to characterize each quantity of interest applies to the X_{i} and also to Y. In the latter case, the characterizing probability distribution for Y is determined by the measurement model together with the probability distributions for the X_{i}. The determination of the probability distribution for Y from this information is known as the propagation of distributions.^{[4]}
As an example, consider a measurement function Y = X_{1} + X_{2} in which X_{1} and X_{2} are each characterized by a (different) rectangular, or uniform, probability distribution. In this case Y has a symmetric trapezoidal probability distribution.
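This can be checked numerically. The sketch below (with assumed illustrative limits for the two rectangular distributions) draws samples of X_{1} + X_{2} and verifies the flat top of the resulting trapezoidal distribution.

```python
import random

random.seed(1)

# X1 ~ uniform on [0, 1], X2 ~ uniform on [0, 3] (assumed illustrative limits)
samples = [random.uniform(0.0, 1.0) + random.uniform(0.0, 3.0)
           for _ in range(200_000)]

def freq(lo, hi):
    """Empirical probability that a sample falls in [lo, hi)."""
    return sum(lo <= s < hi for s in samples) / len(samples)

# The density of Y = X1 + X2 is trapezoidal: it rises on [0, 1], is flat
# (height 1/3) on [1, 3], and falls on [3, 4]. Each unit-width bin on the
# flat top should therefore hold about 1/3 of the probability.
assert abs(freq(1.0, 2.0) - 1 / 3) < 0.01
assert abs(freq(2.0, 3.0) - 1 / 3) < 0.01
```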
Once the input quantities X_{1}, ... , X_{N} have been characterized by appropriate probability distributions, and the measurement model has been developed, the probability distribution for the measurand Y is fully specified in terms of this information. In particular, the expectation of Y is used as the estimate of Y, and the standard deviation of Y as the standard uncertainty associated with this estimate.
Often an interval containing Y with a specified probability is required. Such an interval, a coverage interval, can be deduced from the probability distribution for Y. The specified probability is known as the coverage probability. For a given coverage probability, there is more than one coverage interval. The probabilistically symmetric coverage interval is an interval for which the probabilities (summing to one minus the coverage probability) of a value to the left and the right of the interval are equal. The shortest coverage interval is an interval for which the length is least over all coverage intervals having the same coverage probability.
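Given a sample-based representation of the distribution for Y, a probabilistically symmetric coverage interval can be read off from the order statistics. The sketch below uses an assumed Gaussian example for Y; the function itself only requires sorted samples.

```python
import random

random.seed(0)
# Assumed example: samples representing the probability distribution for Y
ys = sorted(random.gauss(10.0, 0.5) for _ in range(100_000))

def symmetric_coverage_interval(sorted_samples, p=0.95):
    """Probabilistically symmetric coverage interval for coverage probability p.

    Equal probability, (1 - p)/2, is left in each tail outside the interval.
    """
    n = len(sorted_samples)
    lo = sorted_samples[int(n * (1 - p) / 2)]
    hi = sorted_samples[int(n * (1 + p) / 2) - 1]
    return lo, hi

lo, hi = symmetric_coverage_interval(ys)
# For a Gaussian distribution the symmetric 95 % interval is approximately
# the expectation plus or minus 1.96 standard deviations, and it coincides
# (to sampling error) with the shortest coverage interval.
```

For skewed distributions the shortest coverage interval differs from the probabilistically symmetric one and would be found by sliding a window of fixed coverage probability across the sorted samples.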
Prior knowledge about the true value of the output quantity Y can also be considered. For the domestic bathroom scale, the fact that the person's mass is positive, and that it is the mass of a person, rather than that of a motor car, that is being measured, both constitute prior knowledge about the possible values of the measurand in this example. Such additional information can be used to provide a probability distribution for Y that can give a smaller standard deviation for Y and hence a smaller standard uncertainty associated with the estimate of Y.^{[5]}^{[6]}^{[7]}
Type A and Type B evaluation of uncertainty
Knowledge about an input quantity X_{i} is inferred from repeated measured values (Type A evaluation of uncertainty), or scientific judgement or other information concerning the possible values of the quantity (Type B evaluation of uncertainty).
In Type A evaluations of measurement uncertainty, the assumption is often made that the distribution best describing an input quantity X given repeated measured values of it (obtained independently) is a Gaussian distribution. X then has expectation equal to the average measured value and standard deviation equal to the standard deviation of the average. When the uncertainty is evaluated from a small number of measured values (regarded as instances of a quantity characterized by a Gaussian distribution), the corresponding distribution can be taken as a t-distribution.^{[8]} Other considerations apply when the measured values are not obtained independently.
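A Type A evaluation from independent repeated values reduces to the familiar sample statistics. The sketch below uses assumed illustrative readings; the estimate is the average and the standard uncertainty is the standard deviation of the average.

```python
import math

# Assumed example: independent repeated measured values (e.g. a length in mm)
values = [10.02, 10.05, 9.98, 10.01, 10.03, 9.99, 10.04, 10.00]

n = len(values)
mean = sum(values) / n                                 # estimate x of X
s2 = sum((v - mean) ** 2 for v in values) / (n - 1)    # sample variance
u = math.sqrt(s2 / n)                                  # standard uncertainty
                                                       # of the average

# With only n = 8 values, a coverage interval would be based on the
# t-distribution with n - 1 = 7 degrees of freedom rather than a Gaussian.
```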
For a Type B evaluation of uncertainty, often the only available information is that X lies in a specified interval [a, b]. In such a case, knowledge of the quantity can be characterized by a rectangular probability distribution^{[8]} with limits a and b. If different information were available, a probability distribution consistent with that information would be used.^{[9]}
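For the rectangular distribution on [a, b], the expectation is the midpoint and the standard deviation is (b - a)/(2√3), so a Type B evaluation from stated limits is a one-line calculation. Illustrative limits are assumed below.

```python
import math

# Type B sketch: the only available information is that X lies in [a, b]
a, b = 9.9, 10.1   # assumed illustrative limits

x = (a + b) / 2                     # expectation of the rectangular distribution
u = (b - a) / (2 * math.sqrt(3))    # its standard deviation, taken as the
                                    # standard uncertainty u(x)

# For [9.9, 10.1] this gives x = 10.0 and u of roughly 0.058
```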
Sensitivity coefficients
Main article: Sensitivity analysis

Sensitivity coefficients c_{1}, ... , c_{N} describe how the estimate y of Y would be influenced by small changes in the estimates x_{1}, ... , x_{N} of the input quantities X_{1}, ... , X_{N}. For the measurement function Y = f(X_{1}, ... , X_{N}), the sensitivity coefficient c_{i} equals the first-order partial derivative of f with respect to X_{i} evaluated at X_{1} = x_{1}, X_{2} = x_{2}, etc. For a linear measurement function
 Y = c_{1}X_{1} + ... + c_{N}X_{N},
with X_{1}, ... , X_{N} independent, a change in x_{i} equal to u(x_{i}) would give a change c_{i}u(x_{i}) in y. This statement would generally be approximate for measurement functions Y = f(X_{1}, ... , X_{N}). The relative magnitudes of the terms |c_{i}|u(x_{i}) are useful in assessing the respective contributions from the input quantities to the standard uncertainty u(y) associated with y.
The standard uncertainty u(y) associated with the estimate y of the output quantity Y is not given by the sum of the |c_{i}|u(x_{i}), but by these terms combined in quadrature,^{[2]} namely by (an expression that is generally approximate for measurement functions Y = f(X_{1}, ... , X_{N}))
 u(y) = (c_{1}^{2}u^{2}(x_{1}) + ... + c_{N}^{2}u^{2}(x_{N}))^{1/2},
which is known as the law of propagation of uncertainty.
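For independent inputs the law of propagation of uncertainty is a root-sum-of-squares combination. The sketch below uses assumed illustrative numbers for a model Y = X_{1} + 2X_{2}, whose sensitivity coefficients are c_{1} = 1 and c_{2} = 2.

```python
import math

def combined_standard_uncertainty(sensitivities, standard_uncertainties):
    """u(y) = sqrt(sum_i (c_i * u(x_i))^2) for independent input quantities."""
    return math.sqrt(sum((c * u) ** 2
                         for c, u in zip(sensitivities, standard_uncertainties)))

# Assumed example: Y = X1 + 2*X2, so c1 = 1 and c2 = 2
c = [1.0, 2.0]
u_x = [0.3, 0.4]
u_y = combined_standard_uncertainty(c, u_x)
# u_y = sqrt(0.3**2 + 0.8**2) = sqrt(0.73), roughly 0.854; the term
# c2*u(x2) = 0.8 dominates the uncertainty budget.
```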
When the input quantities X_{i} contain dependencies, the above formula is augmented by terms containing covariances,^{[2]} which may increase or decrease u(y).
Stages of uncertainty evaluation
The main stages of uncertainty evaluation are formulation and calculation, the latter consisting of propagation and summarizing. The formulation stage consists of
 defining the output quantity Y (the measurand),
 identifying the input quantities on which Y depends,
 developing a measurement model relating Y to the input quantities, and
 on the basis of available knowledge, assigning probability distributions — Gaussian, rectangular, etc. — to the input quantities (or a joint probability distribution to those input quantities that are not independent).
The calculation stage consists of propagating the probability distributions for the input quantities through the measurement model to obtain the probability distribution for the output quantity Y, and summarizing by using this distribution to obtain
 the expectation of Y, taken as an estimate y of Y,
 the standard deviation of Y, taken as the standard uncertainty u(y) associated with y, and
 a coverage interval containing Y with a specified coverage probability.
The propagation stage of uncertainty evaluation is known as the propagation of distributions, various approaches for which are available, including
 1) the GUM uncertainty framework, constituting the application of the law of propagation of uncertainty, and the characterization of the output quantity Y by a Gaussian or a t-distribution,
 2) analytic methods, in which mathematical analysis is used to derive an algebraic form for the probability distribution for Y, and
 3) a Monte Carlo method,^{[4]} in which an approximation to the distribution function for Y is established numerically by making random draws from the probability distributions for the input quantities, and evaluating the model at the resulting values.
For any particular uncertainty evaluation problem, approach 1), 2) or 3) (or some other approach) is used, 1) being generally approximate, 2) exact, and 3) providing a solution with a numerical accuracy that can be controlled.
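The Monte Carlo approach and the GUM uncertainty framework can be compared directly on a small model. The sketch below assumes an illustrative product model Y = X_{1}X_{2} with Gaussian inputs; because the input uncertainties are small relative to the estimates, the two approaches agree closely here.

```python
import math
import random

random.seed(42)

def model(x1, x2):
    """Assumed illustrative measurement model Y = X1 * X2."""
    return x1 * x2

# Monte Carlo propagation: draw from the input distributions, evaluate the
# model, and summarize the resulting sample of Y.
M = 200_000
ys = [model(random.gauss(2.0, 0.02), random.gauss(3.0, 0.03)) for _ in range(M)]

est = sum(ys) / M                                          # estimate y of Y
u = math.sqrt(sum((v - est) ** 2 for v in ys) / (M - 1))   # standard uncertainty

# GUM uncertainty framework for the same model: c1 = x2, c2 = x1, so
# u(y)^2 is approximately (x2*u(x1))^2 + (x1*u(x2))^2.
u_gum = math.sqrt((3.0 * 0.02) ** 2 + (2.0 * 0.03) ** 2)   # about 0.085
```

For strongly non-linear models or dominant non-Gaussian inputs, the Monte Carlo result would differ from the law-of-propagation result, which is the case the GUM Supplement 1 is designed to handle.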
Joint Committee for Guides in Metrology
In 1997 a Joint Committee for Guides in Metrology (JCGM), chaired by the Director of the BIPM, was created by the seven international organizations that had originally in 1993 prepared the "Guide to the expression of uncertainty in measurement" (GUM) and the "International vocabulary of metrology – basic and general concepts and associated terms" (VIM). The JCGM assumed responsibility for these two documents from the ISO Technical Advisory Group 4 (TAG4).
The Joint Committee is formed by the BIPM with the International Electrotechnical Commission (IEC), the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC), the International Laboratory Accreditation Cooperation (ILAC), the International Organization for Standardization (ISO), the International Union of Pure and Applied Chemistry (IUPAC), the International Union of Pure and Applied Physics (IUPAP), and the International Organization of Legal Metrology (OIML).
JCGM has two Working Groups. Working Group 1, "Expression of uncertainty in measurement", has the task of promoting the use of the GUM and preparing Supplements and other documents for its broad application. Working Group 2, "Working Group on International vocabulary of basic and general terms in metrology (VIM)", has the task of revising and promoting the use of the VIM. For further information on the activity of the JCGM, see www.bipm.org.
Working Group 1 has begun revising the GUM itself, in parallel with work on a series of JCGM documents under the generic heading Evaluation of measurement data. The parts in the series are
 JCGM 100:2008. Evaluation of measurement data — Guide to the expression of uncertainty in measurement (GUM),
 JCGM 101:2008. Evaluation of measurement data – Supplement 1 to the "Guide to the expression of uncertainty in measurement" – Propagation of distributions using a Monte Carlo method,
 JCGM 102. Evaluation of measurement data – Supplement 2 to the "Guide to the expression of uncertainty in measurement" – Models with any number of output quantities,
 JCGM 103. Evaluation of measurement data – Supplement 3 to the "Guide to the expression of uncertainty in measurement" – Modelling,
 JCGM 104:2009. Evaluation of measurement data – An introduction to the "Guide to the expression of uncertainty in measurement" and related documents,
 JCGM 105. Evaluation of measurement data – Concepts and basic principles,
 JCGM 106. Evaluation of measurement data – The role of measurement uncertainty in conformity assessment, and
 JCGM 107. Evaluation of measurement data – Applications of the least-squares method.
Alternative perspective
Most of this article represents the most common view of measurement uncertainty, which assumes that random variables are proper mathematical models for uncertain quantities and simple probability distributions are sufficient for representing all forms of measurement uncertainties. In some situations, however, a mathematical interval rather than a probability distribution might be a better model of uncertainty. This may include situations involving periodic measurements, binned data values, censoring, detection limits, or plus-minus ranges of measurements where no particular probability distribution seems justified or where one cannot assume that the errors among individual measurements are completely independent.
A more robust representation of measurement uncertainty in such cases can be fashioned from intervals.^{[10]}^{[11]} An interval [a, b] is different from a rectangular or uniform probability distribution over the same range in that the latter suggests that the true value lies inside the right half of the range [(a+b)/2, b] with probability one half, and within any subinterval of [a, b] with probability equal to the width of the subinterval divided by b – a. The interval makes no such claims, except simply that the measurement lies somewhere within the interval. Distributions of such measurement intervals can be summarized as probability boxes and Dempster–Shafer structures over the real numbers, which incorporate both aleatoric and epistemic uncertainties.
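Interval-valued quantities propagate through a model by interval arithmetic rather than by combining variances. The sketch below shows minimal (assumed) addition and multiplication rules for closed intervals; the result is a guaranteed enclosure of the output, with no distributional claim inside it.

```python
def interval_add(x, y):
    """Sum of two closed intervals (a, b) and (c, d) is (a + c, b + d)."""
    return (x[0] + y[0], x[1] + y[1])

def interval_mul(x, y):
    """Product interval: take the min and max over all endpoint products.

    Checking all four products handles intervals containing negative values.
    """
    products = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(products), max(products))

# Assumed example: a rectangle whose sides are known only to within limits
length = (9.9, 10.1)   # measured length, plus-minus 0.1
width = (4.8, 5.2)     # measured width, plus-minus 0.2

area = interval_mul(length, width)
# The true area is guaranteed to lie in the resulting interval,
# roughly (47.52, 52.52), whatever the error distributions are.
```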
See also
 Metrology
 Experimental uncertainty analysis
 Test method
 Uncertainty
 Confidence interval
 Propagation of uncertainty
 List of uncertainty propagation software
Further reading
 JCGM 200:2008. International Vocabulary of Metrology – Basic and general concepts and associated terms, 3rd Edition. Joint Committee for Guides in Metrology.
 ISO 3534-1:2006. Statistics – Vocabulary and symbols – Part 1: General statistical terms and terms used in probability.
 Bell, S. Measurement Good Practice Guide No. 11. A Beginner's Guide to Uncertainty of Measurement. Technical report, National Physical Laboratory, 1999.
 Cox, M. G., and Harris, P. M. SSfM Best Practice Guide No. 6, Uncertainty evaluation. Technical report DEM-ES-011, National Physical Laboratory, 2006.
 Cox, M. G., and Harris, P. M. Software specifications for uncertainty evaluation. Technical report DEM-ES-010, National Physical Laboratory, 2006.
 Grabe, M., Measurement Uncertainties in Science and Technology, Springer, 2005. http://www.springer.com/physics/book/9783540209447
 Grabe, M., Generalized Gaussian Error Calculus, Springer, 2010. http://www.springer.com/physics/book/9783642033049
 Dietrich, C. F. Uncertainty, Calibration and Probability. Adam Hilger, Bristol, UK, 1991.
 NIST. Uncertainty of measurement results.
 Bich, W., Cox, M. G., and Harris, P. M. Evolution of the "Guide to the Expression of Uncertainty in Measurement". Metrologia, 43(4):S161–S166, 2006.
 EA. Expression of the uncertainty of measurement in calibration. Technical Report EA4/02, European Cooperation for Accreditation, 1999.
 Elster, C., and Toman, B. Bayesian uncertainty analysis under prior ignorance of the measurand versus analysis using Supplement 1 to the Guide: a comparison. Metrologia, 46:261–266, 2009.
 Ferson, S., Kreinovich, V., Hajagos, J., Oberkampf, W., and Ginzburg, L. 2007. "Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty". SAND2007-0939.
 Lira., I. Evaluating the Uncertainty of Measurement. Fundamentals and Practical Guidance. Institute of Physics, Bristol, UK, 2002.
 Majcen N., Taylor P. (Editors), Practical examples on traceability, measurement uncertainty and validation in chemistry, Vol 1, 2010; ISBN 978-92-79-12021-3.
 UKAS. The expression of uncertainty in EMC testing. Technical Report LAB34, United Kingdom Accreditation Service, 2002.
 NPLUnc
 Estimate of temperature and its uncertainty in small systems, 2011.
References
 ^ Bell, S. Measurement Good Practice Guide No. 11. A Beginner's Guide to Uncertainty of Measurement. Tech. rep., National Physical Laboratory, 1999.
 ^ ^{a} ^{b} ^{c} JCGM 100:2008. Evaluation of measurement data – Guide to the expression of uncertainty in measurement. Joint Committee for Guides in Metrology.
 ^ Kent J. Gregory, Giovani Bibbo, and John E. Pattison (2005), A Standard Approach to Measurement Uncertainties for Scientists and Engineers in Medicine, Australasian Physical and Engineering Sciences in Medicine 28(2):131–139.
 ^ ^{a} ^{b} ^{c} ^{d} JCGM 101:2008. Evaluation of measurement data – Supplement 1 to the "Guide to the expression of uncertainty in measurement" – Propagation of distributions using a Monte Carlo method. Joint Committee for Guides in Metrology.
 ^ Bernardo, J., and Smith, A. Bayesian Theory. John Wiley & Sons, New York, USA, 2000. 3.20
 ^ Elster, C. Calculation of uncertainty in the presence of prior knowledge. Metrologia 44 (2007), 111–116. 3.20
 ^ EURACHEM/CITAC. Quantifying uncertainty in analytical measurement. Tech. Rep. Guide CG4, EURACHEM/CITEC,[EURACHEM/CITAC Guide], 2000. Second edition.
 ^ ^{a} ^{b} JCGM 104:2009. Evaluation of measurement data – An introduction to the "Guide to the expression of uncertainty in measurement" and related documents. Joint Committee for Guides in Metrology.
 ^ Weise, K., and Wöger, W. A Bayesian theory of measurement uncertainty. Meas. Sci. Technol. 3 (1992), 1–11, 4.8.
 ^ Manski, C.F. (2003); Partial Identification of Probability Distributions, Springer Series in Statistics, Springer, New York
 ^ Ferson, S., V. Kreinovich, J. Hajagos, W. Oberkampf, and L. Ginzburg (2007); Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty, Sandia National Laboratories SAND2007-0939.