Effect size

In statistics, an effect size is a measure of the strength of the relationship between two variables in a statistical population, or a sample-based estimate of that quantity. An effect size calculated from data is a descriptive statistic that conveys the estimated magnitude of a relationship without making any statement about whether the apparent relationship in the data reflects a true relationship in the population. In that way, effect sizes complement inferential statistics such as p-values. Among other uses, effect size measures play an important role in meta-analysis studies that summarize findings from a specific area of research, and in statistical power analyses.

The concept of effect size appears already in everyday language. For example, a weight loss program may boast that it leads to an average weight loss of 30 pounds. In this case, 30 pounds is an indicator of the claimed effect size. Another example is that a tutoring program may claim that it raises school performance by one letter grade. This grade increase is the claimed effect size of the program. These are both examples of "absolute effect sizes," meaning that they convey the average difference between two groups without any discussion of the variability within the groups. For example, if the weight loss program results in an average loss of 30 pounds, it is possible that every participant loses exactly 30 pounds, or half the participants lose 60 pounds and half lose no weight at all.

Reporting effect sizes is considered good practice when presenting empirical research findings in many fields.[1][2] The reporting of effect sizes facilitates the interpretation of the substantive, as opposed to the statistical, significance of a research result.[3] Effect sizes are particularly prominent in social and medical research. Relative and absolute measures of effect size convey different information, and can be used complementarily. A prominent task force in the psychology research community expressed the following recommendation:

Always present effect sizes for primary outcomes...If the units of measurement are meaningful on a practical level (e.g., number of cigarettes smoked per day), then we usually prefer an unstandardized measure (regression coefficient or mean difference) to a standardized measure (r or d).

L. Wilkinson and APA Task Force on Statistical Inference (1999, p. 599)

Overview

Population and sample effect sizes

The term effect size can refer to a statistic calculated from a sample of data, or to a parameter of a hypothetical statistical population. Conventions for distinguishing sample from population effect sizes follow standard statistical practices — one common approach is to use Greek letters like ρ to denote population parameters and Latin letters like r to denote the corresponding statistic; alternatively, a "hat" can be placed over the population parameter to denote the statistic, e.g. with ρ̂ being the estimate of the parameter ρ.

As in any statistical setting, effect sizes are estimated with error, and may be biased unless the effect size estimator that is used is appropriate for the manner in which the data were sampled and the manner in which the measurements were made. An example of this is publication bias, which occurs when scientists report results only when the estimated effect sizes are large or statistically significant. As a result, if many researchers carry out studies with low statistical power, the reported results are biased to be stronger than the true effects, if any.[4] Another example where effect sizes may be distorted is in a multiple-trial experiment where the effect size calculation is based on the averaged or aggregated response across the trials.[5]

Relationship to test statistics

Sample-based effect sizes are distinguished from test statistics used in hypothesis testing, in that they estimate the strength of an apparent relationship, rather than assigning a significance level reflecting whether the relationship could be due to chance. The effect size does not determine the significance level, or vice-versa. Given a sufficiently large sample size, a statistical comparison will always show a significant difference unless the population effect size is exactly zero. For example, a sample Pearson correlation coefficient of 0.1 is strongly statistically significant if the sample size is 1000. Reporting only the significant p-value from this analysis could be misleading if a correlation of 0.1 is too small to be of interest in a particular application.
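As a minimal numeric illustration of this point, the following hedged sketch (assuming SciPy is available; the helper name pearson_r_pvalue is illustrative, not from any library) shows how the same correlation of 0.1 moves from non-significant to strongly significant purely as the sample size grows:

from scipy import stats
import numpy as np

def pearson_r_pvalue(r, n):
    # Two-sided p-value for a sample Pearson correlation r based on n pairs,
    # via the t statistic t = r * sqrt((n - 2) / (1 - r^2)) with n - 2 df.
    t = r * np.sqrt((n - 2) / (1 - r**2))
    return 2 * stats.t.sf(abs(t), df=n - 2)

print(pearson_r_pvalue(0.1, 30))    # ~0.60: nowhere near significant
print(pearson_r_pvalue(0.1, 1000))  # ~0.0015: strongly significant, same effect size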

Standardized and unstandardized effect sizes

The term effect size can refer to a standardized measure of effect (such as r, Cohen's d, or the odds ratio), or to an unstandardized measure (e.g., the raw difference between group means, or an unstandardized regression coefficient). Standardized effect size measures are typically used when the metrics of the variables being studied do not have intrinsic meaning (e.g., a score on a personality test on an arbitrary scale), when results from multiple studies that use different scales are being combined, or when it is desired to convey the size of an effect relative to the variability in the population. In meta-analysis, standardized effect sizes are used as a common measure that can be calculated for different studies and then combined into an overall summary.

Types

Pearson r correlation

Pearson's correlation, often denoted r and introduced by Karl Pearson, is widely used as an effect size when paired quantitative data are available, for instance if one were studying the relationship between birth weight and longevity. The correlation coefficient can also be used when the data are binary. Pearson's r can vary in magnitude from −1 to 1, with −1 indicating a perfect negative linear relation, 1 indicating a perfect positive linear relation, and 0 indicating no linear relation between two variables. Cohen gives the following guidelines for the social sciences: small effect size, r = 0.1–0.23; medium, r = 0.24–0.36; large, r = 0.37 or larger.[6][7]

A related effect size is the coefficient of determination (the square of r, referred to as "r-squared"). In the case of paired data, this is a measure of the proportion of variance shared by the two variables, and varies from 0 to 1. For example, an r of 0.21 gives a coefficient of determination of 0.0441, meaning that 4.4% of the variance of either variable is shared with the other variable. The r2 is always positive, so it does not convey the direction of the relationship between the two variables.

Effect sizes based on means

A (population) effect size θ based on means usually considers the standardized mean difference between two populations[8]:78

\theta = \frac{\mu_1 - \mu_2}{\sigma},

where μ1 is the mean for one population, μ2 is the mean for the other population, and σ is a standard deviation based on either or both populations.

In practice the population values are typically not known and must be estimated from sample statistics. The several versions of effect sizes based on means differ with respect to which statistics are used.

This form for the effect size resembles the computation for a t-test statistic, with the critical difference that the t-test statistic includes a factor of \sqrt{n}. This means that for a given effect size, the significance level increases with the sample size. Unlike the t-test statistic, the effect size aims to estimate a population parameter, so is not affected by the sample size.

Cohen's d

Cohen's d is defined as the difference between two means divided by a standard deviation for the data:

d = \frac{\bar{x}_1 - \bar{x}_2}{s}.

Cohen's d is frequently used in estimating sample sizes. A lower Cohen's d indicates that larger sample sizes are needed, and vice versa; the required sample size can then be determined together with the additional parameters of desired significance level and statistical power.[9]

Jacob Cohen did not originally make explicit precisely which standard deviation s is meant; he defined it (using the symbol "σ") as "the standard deviation of either population (since they are assumed equal)".[6]:20 Other authors make the computation of the standard deviation more explicit, with the following definition of a pooled standard deviation for two independent samples:[10]:14

s = \sqrt{\frac{(n_1-1)s^2_1 + (n_2-1)s^2_2}{n_1+n_2} },
s_1^2 = \frac{1}{n_1-1} \sum_{i=1}^{n_1} (x_{1,i} - \bar{x}_1)^2

This definition of "Cohen's d" is termed the maximum likelihood estimator by Hedges and Olkin,[8] and it is related to Hedges' g (see below) by a scaling[8]:82

g = \sqrt{\frac{n_1+n_2-2}{n_1+n_2}} d
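A minimal sketch of these two formulas in Python (assuming NumPy; the function names and sample data are illustrative, not from any library):

import numpy as np

def cohens_d_mle(x1, x2):
    n1, n2 = len(x1), len(x2)
    # Pooled standard deviation with denominator n1 + n2 (the MLE form above)
    s = np.sqrt(((n1 - 1) * np.var(x1, ddof=1) + (n2 - 1) * np.var(x2, ddof=1))
                / (n1 + n2))
    return (np.mean(x1) - np.mean(x2)) / s

def hedges_g_from_d(d, n1, n2):
    # g = sqrt((n1 + n2 - 2) / (n1 + n2)) * d, as given above
    return np.sqrt((n1 + n2 - 2) / (n1 + n2)) * d

rng = np.random.default_rng(0)
x1, x2 = rng.normal(1.0, 1.0, 40), rng.normal(0.5, 1.0, 50)
d = cohens_d_mle(x1, x2)
print(d, hedges_g_from_d(d, len(x1), len(x2)))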

Glass's Δ

In 1976 Gene V. Glass proposed an estimator of the effect size that uses only the standard deviation of the second group[8]:78

\Delta = \frac{\bar{x}_1 - \bar{x}_2}{s_2}

The second group may be regarded as a control group, and Glass argued that if several treatments were compared to the control group it would be better to use just the standard deviation computed from the control group, so that effect sizes would not differ under equal means and different variances.

Under an assumption of equal population variances a pooled estimate for σ is more precise.

Hedges' g

Hedges' g, suggested by Larry Hedges in 1981,[11] is like the other measures based on a standardized difference[8]:79

g = \frac{\bar{x}_1 - \bar{x}_2}{s^*}

but its pooled standard deviation s* is computed slightly differently from Cohen's d:

s^* = \sqrt{\frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2}}.

As an estimator for the population effect size θ it is biased. However, this bias can be corrected for by multiplication with a factor

g^* = J(n_1+n_2-2) g \approx \left(1-\frac{3}{4(n_1+n_2)-9}\right) g

Hedges and Olkin refer to this unbiased estimator g* as d,[8] but it is not the same as Cohen's d. The exact form for the correction factor J() involves the gamma function[8]:104

J(a) = \frac{\Gamma(a/2)}{\sqrt{a/2}\Gamma((a-1)/2)}.
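The correction factor can be evaluated numerically; a hedged sketch (assuming SciPy; the log-gamma function is used to avoid overflow for large arguments):

import numpy as np
from scipy.special import gammaln

def J_exact(a):
    # J(a) = Gamma(a/2) / (sqrt(a/2) * Gamma((a-1)/2)), via log-gamma for stability
    return np.exp(gammaln(a / 2) - 0.5 * np.log(a / 2) - gammaln((a - 1) / 2))

def J_approx(n1, n2):
    # The approximation 1 - 3 / (4(n1 + n2) - 9) quoted above
    return 1 - 3 / (4 * (n1 + n2) - 9)

n1, n2 = 10, 12
print(J_exact(n1 + n2 - 2), J_approx(n1, n2))  # both approximately 0.962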

Distribution of effect sizes based on means

Provided that the data is Gaussian distributed a scaled Hedges' g, \sqrt{n_1 n_2/(n_1+n_2)}\,g, follows a noncentral t-distribution with the noncentrality parameter \sqrt{n_1 n_2/(n_1+n_2)}\,\theta and n1 + n2 − 2 degrees of freedom. Likewise, the scaled Glass' Δ follows a noncentral t-distribution with n2 − 1 degrees of freedom.

From the distribution it is possible to compute the expectation and variance of the effect sizes.

In some cases large sample approximations for the variance are used. One suggestion for the variance of Hedges' unbiased estimator is[8]:86

\hat{\sigma}^2(g^*) = \frac{n_1+n_2}{n_1 n_2} + \frac{(g^*)^2}{2(n_1 + n_2)}.
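A hedged sketch of how this large-sample variance can be used for a rough normal-approximation confidence interval (illustrative numbers; not a substitute for the noncentral-t intervals discussed later):

import numpy as np

def var_g_star(g_star, n1, n2):
    # Large-sample variance of Hedges' unbiased estimator, per the formula above
    return (n1 + n2) / (n1 * n2) + g_star**2 / (2 * (n1 + n2))

g_star, n1, n2 = 0.5, 40, 50
se = np.sqrt(var_g_star(g_star, n1, n2))
print(g_star - 1.96 * se, g_star + 1.96 * se)  # rough 95% interval for the population effect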

Cohen's ƒ2

Cohen's ƒ2 is one of several appropriate effect size measures to use in the context of an F-test for ANOVA or multiple regression. However, as a sample-based estimate it is biased: it overestimates the population effect size for ANOVA. A less biased estimator for ANOVA is omega squared (ω2), which estimates the population quantity.

The ƒ2 effect size measure for multiple regression is defined as:

f^2 = {R^2 \over 1 - R^2}
where R2 is the squared multiple correlation.

The f2 effect size measure for hierarchical multiple regression is defined as:

f^2 = {R^2_{AB} - R^2_A \over 1 - R^2_{AB}}
where R2A is the variance accounted for by a set of one or more independent variables A, and R2AB is the combined variance accounted for by A and another set of one or more independent variables B.

By convention, ƒ2 effect sizes of 0.02, 0.15, and 0.35 are termed small, medium, and large, respectively.[6]
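A minimal sketch of both f2 forms above (plain Python; the R-squared values are illustrative):

def f2(r2):
    # f^2 = R^2 / (1 - R^2) for multiple regression
    return r2 / (1 - r2)

def f2_hierarchical(r2_a, r2_ab):
    # f^2 for the increment of set B over set A in hierarchical regression
    return (r2_ab - r2_a) / (1 - r2_ab)

print(f2(0.13))                     # ~0.15, a "medium" effect by the convention above
print(f2_hierarchical(0.10, 0.20))  # 0.125 for the variance added by B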

Cohen's \hat{f} can also be found for factorial analysis of variance (ANOVA) by working backwards, using:

\hat{f}_\text{effect} = {\sqrt{(df_\text{effect}/N) (F_\text{effect}-1)}}.
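A hedged sketch of this back-calculation (the F value, degrees of freedom, and N below are illustrative):

import numpy as np

def f_from_anova(F, df_effect, N):
    # Cohen's f-hat recovered from a reported F statistic, per the formula above
    return np.sqrt((df_effect / N) * (F - 1.0))

print(f_from_anova(F=4.5, df_effect=2, N=90))  # ~0.28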

In a balanced design (equal sample sizes across groups) of ANOVA, the corresponding population parameter of f2 is

\frac{SS(\mu_1,\mu_2,\dots,\mu_K)}{K \times \sigma^2},

wherein μj denotes the population mean within the jth group of the total K groups, and σ the common population standard deviation within each group. SS is the sum of squares in ANOVA.

ω2

A less biased estimator of the variance explained in the population is omega-squared[12][13][14]

{\hat\omega}^2 = \frac{SS_\text{treatment} - df_\text{treatment} \cdot MS_\text{error}}{SS_\text{total} + MS_\text{error}}.

This form of the formula is limited to between-subjects analysis with equal sample sizes in all cells.[14] Since it is less biased (although not unbiased), ω2 is preferable to Cohen's ƒ2; however, it can be more inconvenient to calculate for complex analyses. A generalized form of the estimator has been published for between-subjects and within-subjects analysis, repeated measures, mixed designs, and randomized block design experiments.[15] In addition, methods to calculate partial ω2 for individual factors and combined factors in designs with up to three independent variables have been published.[15]
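A minimal sketch of the estimator, computed from the usual one-way ANOVA table quantities (illustrative numbers):

def omega_squared(ss_treatment, df_treatment, ss_total, ms_error):
    # Omega-squared from ANOVA table entries, per the formula above
    return (ss_treatment - df_treatment * ms_error) / (ss_total + ms_error)

print(omega_squared(ss_treatment=60.0, df_treatment=2,
                    ss_total=200.0, ms_error=2.0))  # ~0.277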

φ, Cramér's φ, or Cramér's V

Phi (φ):

\phi = \sqrt{\frac{\chi^2}{N}}

Cramér's phi (φc):

\phi_c = \sqrt{\frac{\chi^2}{N(k - 1)}}

The best measure of association for the chi-squared test is phi (or Cramér's phi or V). Phi is related to the point-biserial correlation coefficient and Cohen's d, and estimates the extent of the relationship between two variables (from a 2 × 2 table).[16] Cramér's phi may be used with variables having more than two levels.

Phi can be computed by finding the square root of the chi-squared statistic divided by the sample size.

Similarly, Cramér's phi is computed by taking the square root of the chi-squared statistic divided by the sample size times k − 1, where k is the smaller of the number of rows r or columns c.

φc is the intercorrelation of the two discrete variables[17] and may be computed for any value of r or c. However, as chi-squared values tend to increase with the number of cells, the greater the difference between r and c, the more likely φc will tend to 1 without strong evidence of a meaningful correlation.

Cramér's phi may also be applied to 'goodness of fit' chi-squared models (i.e. those where c=1). In this case it functions as a measure of tendency towards a single outcome (i.e. out of k outcomes).
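A minimal sketch computing both measures from a contingency table (assuming SciPy's chi2_contingency; the counts are illustrative):

import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[20, 30],
                  [40, 10]])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
N = table.sum()
k = min(table.shape)                      # smaller of the number of rows and columns
phi = np.sqrt(chi2 / N)
cramers_v = np.sqrt(chi2 / (N * (k - 1)))
print(phi, cramers_v)                     # equal here, since k - 1 = 1 for a 2 x 2 table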

Odds ratio

The odds ratio (OR) is another useful effect size. It is appropriate when both variables are binary. For example, consider a study on spelling. In a control group, two students pass the class for every one who fails, so the odds of passing are two to one (or 2/1 = 2). In the treatment group, six students pass for every one who fails, so the odds of passing are six to one (or 6/1 = 6). The effect size can be computed by noting that the odds of passing in the treatment group are three times higher than in the control group (because 6 divided by 2 is 3). Therefore, the odds ratio is 3. However, odds ratio statistics are on a different scale from Cohen's d, so this '3' is not comparable to a Cohen's d of 3.
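A minimal sketch of the computation, with counts chosen to match the 2:1 and 6:1 odds of the spelling example (the function name is illustrative):

def odds_ratio(pass_treat, fail_treat, pass_ctrl, fail_ctrl):
    # Ratio of the odds of passing in the treatment group to those in the control group
    return (pass_treat / fail_treat) / (pass_ctrl / fail_ctrl)

print(odds_ratio(pass_treat=6, fail_treat=1, pass_ctrl=2, fail_ctrl=1))  # 3.0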

Relative risk

The relative risk (RR), also called the risk ratio, is the probability of an event in the treatment group relative to the probability in the control group. This measure of effect size differs from the odds ratio in that it compares probabilities instead of odds, but asymptotically approaches the latter for small probabilities. Using the example above, the probabilities of passing in the control group and the treatment group are 2/3 (or 0.67) and 6/7 (or 0.86), respectively. The effect size can be computed as above, but using the probabilities instead. Therefore, the relative risk is approximately 1.29. Since rather large probabilities of passing were used, there is a large difference between the relative risk and the odds ratio. Had failure (a smaller probability) been used as the event (rather than passing), the difference between the two measures of effect size would not be so great.

While both measures are useful, they have different statistical uses. In medical research, the odds ratio is commonly used for case-control studies, as odds, but not probabilities, are usually estimated.[18] Relative risk is commonly used in randomized controlled trials and cohort studies.[19] When the incidence of outcomes are rare in the study population (generally interpreted to mean less than 10%), the odds ratio is considered a good estimate of the risk ratio. However, as outcomes become more common, the odds ratio and risk ratio diverge, with the odds ratio overestimating or underestimating the risk ratio when the estimates are greater than or less than 1, respectively. When estimates of the incidence of outcomes are available, methods exist to convert odds ratios to risk ratios.[20][21]
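A hedged sketch of the relative risk from the same example, together with one published conversion of an odds ratio to a risk ratio given the baseline incidence p0 (the Zhang and Yu formula cited above [20]; treat it as an approximation):

def relative_risk(p_treat, p_ctrl):
    # Ratio of the event probabilities in the two groups
    return p_treat / p_ctrl

def rr_from_or(or_, p0):
    # RR = OR / (1 - p0 + p0 * OR), with p0 the incidence in the control group
    return or_ / (1 - p0 + p0 * or_)

print(relative_risk(6/7, 2/3))  # ~1.29
print(rr_from_or(3.0, 2/3))     # ~1.29, recovering the risk ratio from OR = 3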

Confidence intervals by means of noncentrality parameters

Confidence intervals of unstandardized effect sizes like the difference of means (μ1 − μ2) can be found in common statistics textbooks and software, while confidence intervals of standardized effect sizes, especially Cohen's \tilde{d}:=\frac{\mu_1-\mu_2}{\sigma} and \tilde{f}^2:=\frac{SS(\mu_1,\mu_2,\dots,\mu_K)}{K \cdot \sigma^2}, rely on the calculation of confidence intervals of noncentrality parameters (ncp). A common approach to construct a (1 − α) confidence interval of the ncp is to find the critical ncp values that place the observed statistic at the tail quantiles α/2 and (1 − α/2). SAS and the R package MBESS provide functions for critical values of the ncp.

T test for mean difference of single group or two related groups

In the case of a single group, M (μ) denotes the sample (population) mean of the group, and SD (σ) denotes the sample (population) standard deviation. N is the sample size of the group. The t-test is used for the hypothesis on the difference between the mean and a baseline μbaseline. Usually, μbaseline is zero, though this is not necessary. In the case of two related groups, the single group is constructed from the differences in each pair of observations, and SD (σ) denotes the sample (population) standard deviation of the differences rather than within the original two groups.

t := \frac{M - \mu_\text{baseline}}{SD/\sqrt{N}} = \frac{\sqrt{N}\frac{M-\mu}{\sigma} + \sqrt{N}\frac{\mu-\mu_\text{baseline}}{\sigma}}{\frac{SD}{\sigma}}
ncp=\sqrt{N}\frac{\mu-\mu_\text{baseline}}{\sigma}

and Cohen's

d:=\frac{M-\mu_\text{baseline}}{SD}

is the point estimate of

\frac{\mu-\mu_\text{baseline}}{\sigma}.

So,

\tilde{d}=\frac{ncp}{\sqrt{N}}.
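A hedged sketch of this construction for the single-group case, inverting SciPy's noncentral t distribution numerically (an alternative to the SAS and MBESS routines mentioned above; the helper name ncp_ci is illustrative):

import numpy as np
from scipy.stats import nct
from scipy.optimize import brentq

def ncp_ci(t_obs, df, alpha=0.05, bracket=50.0):
    # Critical ncp values placing the observed t at the alpha/2 and 1 - alpha/2 tails
    lo = brentq(lambda ncp: nct.sf(t_obs, df, ncp) - alpha / 2, -bracket, bracket)
    hi = brentq(lambda ncp: nct.cdf(t_obs, df, ncp) - alpha / 2, -bracket, bracket)
    return lo, hi

N, t_obs = 25, 2.5
lo, hi = ncp_ci(t_obs, df=N - 1)
print(lo / np.sqrt(N), hi / np.sqrt(N))  # confidence interval for d-tilde = ncp / sqrt(N)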

T test for mean difference between two independent groups

n1 and n2 are the sample sizes within the respective groups.

t:=\frac{M_1-M_2}{SD_\text{within}/\sqrt{\frac{n_1 n_2}{n_1+n_2}}},

wherein

SD_\text{within}:=\sqrt{\frac{SS_\text{within}}{df_\text{within}}}=\sqrt{\frac{(n_1-1)SD_1^2+(n_2-1)SD_2^2}{n_1+n_2-2}}.
ncp=\sqrt{\frac{n_1 n_2}{n_1+n_2}}\frac{\mu_1-\mu_2}{\sigma}

and Cohen's

d:=\frac{M_1-M_2}{SD_\text{within}} is the point estimate of \frac{\mu_1-\mu_2}{\sigma}.

So,

\tilde{d}=\frac{ncp}{\sqrt{\frac{n_1 n_2}{n_1+n_2}}}.
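The same inversion applies here, with df = n1 + n2 − 2 and the scale factor √(n1 n2/(n1 + n2)); a brief sketch reusing the illustrative ncp_ci helper from the single-group sketch above:

import numpy as np

n1, n2, t_obs = 30, 35, 2.1
scale = np.sqrt(n1 * n2 / (n1 + n2))
lo, hi = ncp_ci(t_obs, df=n1 + n2 - 2)  # helper defined in the previous sketch
print(lo / scale, hi / scale)           # confidence interval for d-tilde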

One-way ANOVA test for mean difference across multiple independent groups

The one-way ANOVA test uses the noncentral F distribution. With a given population standard deviation σ, the same test question uses the noncentral chi-squared distribution.

F:=\frac{\frac{SS_\text{between}}{\sigma^2}/df_\text{between}}{\frac{SS_\text{within}}{\sigma^2}/df_\text{within}}

For the j-th observation Xi,j within the i-th group, denote

M_i \left(X_{i,j}\right) := \frac{\sum_{w=1}^{n_{i}}X_{i,w}}{n_{i}};\; \mu_i \left(X_{i,j}\right) := \mu_i.

Then,

\begin{array}{ll}
 & SS_\text{between}/\sigma^{2}\\
= & \frac{SS\left(M_{i}\left(X_{i,j}\right);i=1,2,\dots,K,\; j=1,2,\dots,n_{i}\right)}{\sigma^{2}}\\
= & SS\left(\frac{M_{i}\left(X_{i,j}-\mu_{i}\right)}{\sigma}+\frac{\mu_{i}}{\sigma};i=1,2,\dots,K,\; j=1,2,\dots,n_{i}\right)\\
\sim & \chi^{2}\left(df=K-1,\; ncp=SS\left(\frac{\mu_i\left(X_{i,j}\right)}{\sigma};i=1,2,\dots,K,\; j=1,2,\dots,n_{i}\right)\right)\end{array}

So, the noncentrality parameters of F and χ2 both equal

SS\left(\mu_i(X_{i,j})/\sigma;i=1,2,\dots,K,\; j=1,2,\dots,n_i \right).

In case of n:=n_1=n_2=\cdots=n_K for K independent groups of same size, the total sample size is N := n·K.

\text{Cohens }\tilde{f}^2 := \frac{SS(\mu_1,\mu_2, \dots ,\mu_K)}{K\cdot\sigma^{2}} = \frac{SS\left(\mu_i\left(X_{i,j}\right)/\sigma;i=1,2,\dots,K,\; j=1,2,\dots,n_i \right)}{n\cdot K} = \frac{ncp}{n\cdot K}=\frac{ncp}N.

The t-test of a pair of independent groups is a special case of one-way ANOVA. Note that the noncentrality parameter ncpF of F is not comparable to the noncentrality parameter ncpt of the corresponding t. In fact, ncp_F=ncp_t^2, and in that case \tilde{f}=\left|\frac{\tilde{d}}{2}\right|.
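A hedged sketch for the ANOVA case: confidence limits for \tilde{f}^2 = ncp/N by inverting SciPy's noncentral F distribution (illustrative numbers; note the lower limit exists only when the observed F is significant at the α/2 level):

import numpy as np
from scipy.stats import ncf
from scipy.optimize import brentq

def ncp_ci_F(F_obs, dfn, dfd, alpha=0.05, upper=1000.0):
    # Critical ncp values for the noncentral F, mirroring the t-based construction
    lo = brentq(lambda ncp: ncf.sf(F_obs, dfn, dfd, ncp) - alpha / 2, 1e-8, upper)
    hi = brentq(lambda ncp: ncf.cdf(F_obs, dfn, dfd, ncp) - alpha / 2, 1e-8, upper)
    return lo, hi

K, n = 3, 30                       # 3 groups of 30 observations: N = 90
N = K * n
lo, hi = ncp_ci_F(F_obs=4.5, dfn=K - 1, dfd=N - K)
print(lo / N, hi / N)              # confidence interval for f-tilde squared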

"Small", "medium", "large"

Some fields using effect sizes apply words such as "small", "medium" and "large" to the size of the effect. Whether an effect size should be interpreted as small, medium, or large depends on its substantive context and its operational definition. Cohen's conventional criteria of small, medium, and large[6] are near-ubiquitous across many fields. Power analysis or sample size planning requires an assumed population effect size parameter. Many researchers adopt Cohen's standards as default alternative hypotheses. Russell Lenth criticized them as "T-shirt effect sizes":[22]

This is an elaborate way to arrive at the same sample size that has been used in past social science studies of large, medium, and small size (respectively). The method uses a standardized effect size as the goal. Think about it: for a "medium" effect size, you'll choose the same n regardless of the accuracy or reliability of your instrument, or the narrowness or diversity of your subjects. Clearly, important considerations are being ignored here. "Medium" is definitely not the message!

For Cohen's d, an effect size of 0.2 to 0.3 might be a "small" effect, around 0.5 a "medium" effect, and 0.8 or above a "large" effect[6]:25 (but note that d can be larger than one).

Cohen's text[6] anticipates Lenth's concerns:

"The terms 'small,' 'medium,' and 'large' are relative, not only to each other, but to the area of behavioral science or even more particularly to the specific content and research method being employed in any given investigation....In the face of this relativity, there is a certain risk inherent in offering conventional operational definitions for these terms for use in power analysis in as diverse a field of inquiry as behavioral science. This risk is nevertheless accepted in the belief that more is to be gained than lost by supplying a common conventional frame of reference which is recommended for use only when no better basis for estimating the ES index is available." (p. 25)

In an ideal world, researchers would interpret the substantive significance of their results by grounding them in a meaningful context or by quantifying their contribution to knowledge. Where this is problematic, Cohen's effect size criteria may serve as a last resort.[3]

See also

  • Z-factor, an alternative measure of effect size

References

  1. ^ Wilkinson, Leland; APA Task Force on Statistical Inference (1999). "Statistical methods in psychology journals: Guidelines and explanations". American Psychologist 54: 594–604. doi:10.1037/0003-066X.54.8.594. 
  2. ^ Nakagawa, Shinichi; Cuthill, Innes C (2007). "Effect size, confidence interval and statistical significance: a practical guide for biologists". Biological Reviews Cambridge Philosophical Society 82 (4): 591–605. doi:10.1111/j.1469-185X.2007.00027.x. PMID 17944619. 
  3. ^ a b Ellis, Paul D. (2010). The Essential Guide to Effect Sizes: An Introduction to Statistical Power, Meta-Analysis and the Interpretation of Research Results. United Kingdom: Cambridge University Press. 
  4. ^ Brand A, Bradley MT, Best LA, Stoica G (2008). "Accuracy of effect size estimates from published psychological research". Perceptual and Motor Skills 106 (2): 645–649. doi:10.2466/PMS.106.2.645-649. PMID 18556917. http://mtbradley.com/brandbradelybeststoicapdf.pdf. 
  5. ^ Brand A, Bradley MT, Best LA, Stoica G (2011). "Multiple trials may yield exaggerated effect size estimates". The Journal of General Psychology 138 (1): 1–11. doi:10.1080/00221309.2010.520360. http://www.ipsychexpts.com/brand_et_al_(2011).pdf. 
  6. ^ a b c d e f Jacob Cohen (1988). Statistical Power Analysis for the Behavioral Sciences (second ed.). Lawrence Erlbaum Associates. 
  7. ^ Cohen, J (1992). "A power primer". Psychological Bulletin 112: 155–159. doi:10.1037/0033-2909.112.1.155. PMID 19565683. 
  8. ^ a b c d e f g h Larry V. Hedges & Ingram Olkin (1985). Statistical Methods for Meta-Analysis. Orlando: Academic Press. ISBN 0-12-336380-2. 
  9. ^ Chapter 13, page 215, in: Kenny, David A. (1987). Statistics for the social and behavioral sciences. Boston: Little, Brown. ISBN 0-316-48915-8. 
  10. ^ Joachim Hartung, Guido Knapp & Bimal K. Sinha (2008). Statistical Meta-Analysis with Application. Hoboken, New Jersey: Wiley. 
  11. ^ Larry V. Hedges (1981). "Distribution theory for Glass's estimator of effect size and related estimators". Journal of Educational Statistics 6 (2): 107–128. doi:10.3102/10769986006002107. 
  12. ^ Bortz, 1999[Full citation needed], p. 269f.;
  13. ^ Bühner & Ziegler[Full citation needed] (2009, p. 413f)
  14. ^ a b Tabachnick & Fidell (2007, p. 55)
  15. ^ a b Olejnik, S. & Algina, J. 2003. Generalized Eta and Omega Squared Statistics: Measures of Effect Size for Some Common Research Designs Psychological Methods. 8:(4)434-447. http://cps.nova.edu/marker/olejnik2003.pdf
  16. ^ Aaron, B., Kromrey, J. D., & Ferron, J. M. (1998, November). Equating r-based and d-based effect-size indices: Problems with a commonly recommended formula. Paper presented at the annual meeting of the Florida Educational Research Association, Orlando, FL. (ERIC Document Reproduction Service No. ED433353)
  17. ^ Sheskin, David J. (1997). Handbook of Parametric and Nonparametric Statistical Procedures. Boca Raton, Fl: CRC Press.
  18. ^ Deeks J (1998). "When can odds ratios mislead? Odds ratios should be used only in case-control studies and logistic regression analyses". BMJ 317 (7166): 1155–6. PMID 9784470. 
  19. ^ Medical University of South Carolina. Odds ratio versus relative risk. Accessed on: September 8, 2005.
  20. ^ Zhang, J.; Yu, K. (1998). "What's the relative risk? A method of correcting the odds ratio in cohort studies of common outcomes". JAMA : the journal of the American Medical Association 280 (19): 1690–1691. doi:10.1001/jama.280.19.1690. PMID 9832001.  edit
  21. ^ Greenland, S. (2004). "Model-based Estimation of Relative Risks and Other Epidemiologic Measures in Studies of Common Outcomes and in Case-Control Studies". American Journal of Epidemiology 160 (4): 301–305. doi:10.1093/aje/kwh221. PMID 15286014.  edit
  22. ^ Russell V. Lenth. "Java applets for power and sample size". Division of Mathematical Sciences, the College of Liberal Arts or The University of Iowa. http://www.stat.uiowa.edu/~rlenth/Power/. Retrieved 2008-10-08. 

Further reading

  • Aaron, B., Kromrey, J. D., & Ferron, J. M. (1998, November). Equating r-based and d-based effect-size indices: Problems with a commonly recommended formula. Paper presented at the annual meeting of the Florida Educational Research Association, Orlando, FL. (ERIC Document Reproduction Service No. ED433353)
  • Bonett, D.G. (2008). Confidence intervals for standardized linear contrasts of means, Psychological Methods, 13, 99-109.
  • Cumming, G. and Finch, S. (2001). A primer on the understanding, use, and calculation of confidence intervals that are based on central and noncentral distributions. Educational and Psychological Measurement, 61, 530–572.
  • Kelley, K. (2007). Confidence intervals for standardized effect sizes: Theory, application, and implementation. Journal of Statistical Software, 20(8), 1-24. [1]
  • Lipsey, M.W., & Wilson, D.B. (2001). Practical meta-analysis. Sage: Thousand Oaks, CA.
