Rubin Causal Model



The Rubin Causal Model (RCM) is an approach to the statistical analysis of cause and effect based on the framework of "potential outcomes." RCM is named after its originator, Donald Rubin, Professor of Statistics at Harvard University.

Introduction

The Rubin Causal Model is based on the idea of potential outcomes and the assignment mechanism: every unit has different potential outcomes depending on its "assignment" to a condition. For instance, someone may have one income at age 40 if they attend a private college and a different income at age 40 if they attend a public college. To measure the causal effect of going to a public versus a private college, the investigator would need to observe the outcome for the same individual in both alternative futures. Since it is impossible to see both potential outcomes at once, one of the potential outcomes is always missing. A randomized experiment works by assigning people randomly to (in this case) public or private college; because the assignment is random, the groups are (on average) equivalent, and the difference in income at age 40 can be attributed to the college assignment, since that was the only difference between the groups.
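A minimal simulation (all numbers hypothetical) illustrates why randomization works: each unit carries both potential incomes, only one is ever revealed, yet the difference in group means recovers the true average effect.

```python
import random

random.seed(0)

# Hypothetical potential outcomes: income at age 40 (in $1000s) for each unit
# under "private" and under "public" college. Both are never observed together.
units = [{"private": 60 + i % 10, "public": 50 + i % 10} for i in range(1000)]

# The true average causal effect, visible only to an omniscient observer.
true_ate = sum(u["private"] - u["public"] for u in units) / len(units)

# Randomize assignment; each unit reveals exactly one potential outcome.
treated, control = [], []
for u in units:
    if random.random() < 0.5:
        treated.append(u["private"])
    else:
        control.append(u["public"])

# Because assignment is random, the group difference estimates the true effect.
est_ate = sum(treated) / len(treated) - sum(control) / len(control)
print(true_ate, round(est_ate, 2))
```

With random assignment the estimate lands close to the true average effect even though no unit's individual causal effect is ever observed.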

The assignment mechanism is the explanation for why some units received the treatment and others the control. In observational data, there is a non-random assignment mechanism: in the case of college attendance, people may choose to attend a private versus a public college based on their financial situation, parents' education, relative ranks of the schools they were admitted to, etc. If all of these factors can be balanced between the two groups of public and private college students, then the effect of the college attendance can be attributed to the college choice.

Many statistical methods have been developed for causal inference, such as propensity score matching and nearest-neighbor matching (which often uses the Mahalanobis metric, and so is also called Mahalanobis matching). These methods attempt to correct for the assignment mechanism by finding control units similar to treatment units. In the example, matching finds graduates of a public college most similar to graduates of a private college, so that like is compared only with like.
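A sketch of nearest-neighbor matching on toy observational data (the covariates, outcomes, and scale factors below are all hypothetical; a full Mahalanobis match would weight distances by the inverse covariance matrix of the covariates rather than by fixed scales):

```python
import math

# Toy observational data: (parental_income_k, test_score) covariates
# and income at age 40. All values are hypothetical.
treated = [((80, 1300), 62.0), ((95, 1400), 70.0)]                 # private college
control = [((40, 1100), 48.0), ((82, 1290), 55.0), ((97, 1410), 63.0)]  # public college

def distance(x, y, scales=(10.0, 50.0)):
    # Scaled Euclidean distance; Mahalanobis matching would instead use
    # the inverse covariance matrix of the covariates as the weighting.
    return math.sqrt(sum(((a - b) / s) ** 2 for a, b, s in zip(x, y, scales)))

# For each treated unit, find the most similar control unit and
# take the outcome gap between the matched pair.
effects = []
for cov_t, y_t in treated:
    cov_c, y_c = min(control, key=lambda c: distance(cov_t, c[0]))
    effects.append(y_t - y_c)

# Matching estimate of the average effect on the treated.
att = sum(effects) / len(effects)
print(att)
```

The estimate compares like with like: each private-college graduate is paired with the public-college graduate closest to them in covariate space.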

These causal inference methods make few assumptions beyond the requirement that one unit's outcomes be unaffected by another unit's treatment assignment, known as the stable unit treatment value assumption (SUTVA).

An extended example

Rubin defines a causal effect:

Intuitively, the causal effect of one treatment, E, over another, C, for a particular unit and an interval of time from t_1 to t_2 is the difference between what would have happened at time t_2 if the unit had been exposed to E initiated at t_1 and what would have happened at t_2 if the unit had been exposed to C initiated at t_1: 'If an hour ago I had taken two aspirins instead of just a glass of water, my headache would now be gone,' or 'Because an hour ago I took two aspirins instead of just a glass of water, my headache is now gone.' Our definition of the causal effect of the E versus C treatment will reflect this intuitive meaning. [Rubin, Donald. "Estimating Causal Effects of Treatments in Randomized and Nonrandomized Studies", Journal of Educational Psychology, Vol. 66, No. 5 (1974), p. 689.]

According to the RCM, the causal effect of your taking or not taking aspirin one hour ago is the difference between how your head would have felt in case 1 (taking the aspirin) and case 2 (not taking the aspirin). If your headache would remain without aspirin but disappear if you took aspirin, then the causal effect of taking aspirin is headache relief.
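The definition can be written down directly. In this sketch, `Y_t` and `Y_c` stand for the two potential outcomes of the single unit (the headache values are hypothetical):

```python
# Potential outcomes for one unit: headache intensity an hour after treatment,
# under "two aspirins" (Y_t) and under "glass of water" (Y_c). Hypothetical values.
Y_t = 0   # headache gone with aspirin
Y_c = 1   # headache persists without it

# The unit-level causal effect is the difference of the two potential outcomes.
causal_effect = Y_t - Y_c
print(causal_effect)   # negative: aspirin relieved the headache
```

The Fundamental Problem of Causal Inference, discussed below, is that only one of `Y_t` and `Y_c` can ever actually be observed for this unit.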

Suppose that Joe is participating in an FDA test for a new hypertension drug. If we are omniscient, we can see the outcomes for Joe under both the treatment (drug) and control (placebo) conditions and know the treatment effect.

Now there are multiple causal effects. One is the causal effect of the drug on Joe when Mary receives treatment, calculated as 10 - 20. Another is the causal effect on Joe when Mary does not receive treatment, calculated as 0 - 5. The third is the causal effect of Mary's treatment on Joe, calculated as 20 - 5. The treatment Mary receives thus has a greater causal effect on Joe than the assignment of treatment to Joe itself.

With these additional treatments defined, SUTVA holds. However, if any units other than Joe depend on Mary, then we must consider still further treatments. The greater the number of dependent units, the more treatments we must consider and the more complex the calculations become (consider an experiment with 20 different causal effects). To determine the causal effect using only two treatments, the observations must be independent.
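The three causal effects described above can be computed directly by indexing Joe's potential outcome on both his own and Mary's assignment (the response values are those given in the example):

```python
# Joe's blood-pressure response indexed by (joe_treated, mary_treated).
# When SUTVA fails, Joe's potential outcomes depend on Mary's assignment too.
joe_outcome = {
    (True, True): 10,    # Joe treated, Mary treated
    (False, True): 20,   # Joe untreated, Mary treated
    (True, False): 0,    # Joe treated, Mary untreated
    (False, False): 5,   # Joe untreated, Mary untreated
}

# Effect of the drug on Joe, holding Mary's assignment fixed each way:
effect_mary_treated   = joe_outcome[(True, True)] - joe_outcome[(False, True)]    # 10 - 20
effect_mary_untreated = joe_outcome[(True, False)] - joe_outcome[(False, False)]  # 0 - 5

# Effect of Mary's treatment on Joe, holding Joe untreated:
effect_of_mary_on_joe = joe_outcome[(False, True)] - joe_outcome[(False, False)]  # 20 - 5

print(effect_mary_treated, effect_mary_untreated, effect_of_mary_on_joe)
```

Mary's assignment shifts Joe's outcome more than Joe's own assignment does, which is exactly why a single "effect of the drug on Joe" is no longer well defined here.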

Consider an example where not all subjects benefit from the drug.

Question marks are responses that could not be observed. Some scholars call the impossibility of observing responses to multiple treatments on the same subject over a given period of time the "Fundamental Problem of Causal Inference" [Holland, Paul. "Statistics and Causal Inference", Journal of the American Statistical Association, Vol. 81, No. 396 (Dec., 1986), p. 947.]. The FPCI makes "observing" causal effects impossible. However, this does not make causal inference impossible: certain techniques and assumptions allow the FPCI to be overcome.

Suppose that we want to determine the causal effect of the drug on Joe. The FPCI makes it impossible to "observe" the causal effect so we must determine the "average" causal effect instead. To do this, we could instruct Joe to repeat the experiment each month for 6 consecutive months. At the beginning of each month, we would flip a coin to determine which treatment he receives. The results of this experiment follow:
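Since the table of Joe's monthly results is not reproduced here, a sketch with hypothetical monthly potential outcomes shows the logic of the design: each month's coin flip reveals one potential outcome, and the difference in group means estimates the average causal effect.

```python
import random

random.seed(1)

# Hypothetical monthly potential outcomes for Joe (change in blood pressure):
# each month has a response under treatment (drug) and under control (placebo).
months = [(-10, 0), (-8, 1), (-12, -1), (-9, 0), (-11, 2), (-10, 0)]

treated_obs, control_obs = [], []
for y_t, y_c in months:
    if random.random() < 0.5:      # coin flip decides the month's assignment
        treated_obs.append(y_t)    # only the treated response is seen
    else:
        control_obs.append(y_c)    # only the control response is seen

# Estimated average causal effect: mean treated minus mean control response.
est = sum(treated_obs) / len(treated_obs) - sum(control_obs) / len(control_obs)

# The true average causal effect, computable only because this is a simulation.
true_ace = sum(y_t - y_c for y_t, y_c in months) / len(months)
print(round(est, 2), round(true_ace, 2))
```

Even though no single month's causal effect is ever observed, the randomized design makes the estimate track the true average effect.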

Mary's and Susie's blood pressures increase when they take the drug. We do not know the causal effect of the drug on Susie or Mary because we do not know their responses under control.

If we wanted to infer the unobserved values, we could assume either a constant effect or homogeneity, an even stronger assumption than constant effect. If the subjects are all the same, or homogeneous, then they would all have the same response to the treatment and the same response to the control. Mathematically, Y_t(u_1) = Y_t(u_2) and Y_c(u_1) = Y_c(u_2), where u_1 and u_2 are units being tested for homogeneity. Since the causal effect equals Y_t(u) - Y_c(u), the causal effect would then be the same for all units. The following tables illustrate data that support assumptions of constant effect, homogeneity, or both:
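Under the constant-effect assumption, each missing potential outcome can be imputed from the observed one. A sketch with hypothetical responses and a hypothetical assumed effect:

```python
# Hypothetical observed data: each subject reveals one potential outcome;
# None marks the unobservable response (the Fundamental Problem).
subjects = {
    "Joe":   {"t": -10, "c": None},
    "Mary":  {"t": 5,   "c": None},
    "Susie": {"t": None, "c": 2},
}

# Constant effect assumes Y_t(u) - Y_c(u) = delta for every unit u,
# so either missing potential outcome follows from the observed one.
delta = -12   # assumed constant treatment effect (hypothetical)

for y in subjects.values():
    if y["c"] is None:
        y["c"] = y["t"] - delta    # impute the missing control response
    elif y["t"] is None:
        y["t"] = y["c"] + delta    # impute the missing treatment response

# By construction, every unit-level causal effect now equals delta.
effects = [y["t"] - y["c"] for y in subjects.values()]
print(effects)
```

The strength of the assumption is visible here: the data cannot contradict it, because the imputed counterfactuals are manufactured to agree with it.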

This is the true average causal effect. If we instead assign treatments randomly, we can calculate an estimate of this causal effect.

Under this assignment mechanism, it is impossible for women to receive treatment, and therefore impossible to determine the average causal effect on female subjects. To make any inference about the causal effect on a subject, the probability that the subject receives treatment must be greater than 0 and less than 1.
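This requirement (often called positivity, or overlap) is easy to state as a check; the subgroup probabilities below are hypothetical:

```python
# Hypothetical assignment probabilities by subgroup, P(treatment | subgroup),
# from an assignment mechanism that never treats women.
assignment_probs = {"men": 0.5, "women": 0.0}

def positivity_holds(p):
    # Causal inference for a subgroup requires 0 < P(treatment) < 1:
    # both potential outcomes must have some chance of being revealed.
    return 0.0 < p < 1.0

identifiable = {group: positivity_holds(p) for group, p in assignment_probs.items()}
print(identifiable)   # the women's average causal effect is not identifiable
```

No amount of data fixes a violation here: if a subgroup can never be treated, its treated potential outcomes are simply never observed.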

The perfect doctor

Consider the use of the "perfect doctor" as an assignment mechanism. The perfect doctor knows how each subject will respond to the drug or the control and assigns each subject to the treatment that will most benefit her. The perfect doctor knows this information about a sample of patients:

If matched units are homogeneous, then they have the same causal effect. This means that they have the same average causal effect. Therefore, if all units are perfectly matched, the average causal effect equals the causal effect.
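A small sketch (with hypothetical potential outcomes) shows why the perfect doctor's assignment misleads a naive comparison of group means:

```python
# Hypothetical potential outcomes (Y_t, Y_c) for each patient, e.g. years of
# post-treatment survival, known only to the omniscient "perfect doctor".
patients = [(1, 7), (6, 5), (5, 5), (8, 2), (2, 9)]

# The perfect doctor assigns each patient to whichever condition helps more.
treated = [y_t for y_t, y_c in patients if y_t > y_c]
control = [y_c for y_t, y_c in patients if y_t <= y_c]

# Naive comparison of observed group means under this assignment mechanism.
naive = sum(treated) / len(treated) - sum(control) / len(control)

# The true average causal effect, using both potential outcomes per patient.
true_ate = sum(y_t - y_c for y_t, y_c in patients) / len(patients)
print(naive, true_ate)
```

Because treatment is given precisely to those who benefit from it, the naive difference in means overstates the drug's average effect on the whole sample; the assignment mechanism, not the drug, produces the gap.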

Conclusion

The causal effect of a treatment on a single unit at a point in time is the difference between the outcome variable with the treatment and without the treatment. The Fundamental Problem of Causal Inference is that it is impossible to observe the causal effect on a single unit. You either take the aspirin now or you don't. As a consequence, assumptions must be made in order to estimate the missing counterfactuals.

Relations to other approaches

Pearl (2000) [Pearl, Judea. "Causality: Models, Reasoning, and Inference", Cambridge University Press (2000).] has shown the equivalence between the Rubin Causal Model (RCM) and the Structural Equation Model (SEM) used in econometrics and the social sciences. The equivalence rests on defining the potential outcome variable Y_x(u) to be the solution for variable Y under the conditions that (1) the exogenous variables U assume the values u and (2) the equation that determines the value of X is replaced by the constant equation X = x. With this interpretation, every theorem in RCM is a theorem in SEM and vice versa. This equivalence has led to a complete axiomatization of RCM and a complete solution to the identification of causal effects using graphs (Shpitser-Pearl 2006) [Shpitser, Ilya and Pearl, Judea. "Identification of Conditional Interventional Distributions", in R. Dechter and T.S. Richardson (Eds.), "Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence", Corvallis, OR: AUAI Press, pp. 437-444, 2006.]. Moreover, the assumptions that are normally needed for inference in RCM can be read directly from the graphical representation of the SEM.
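The correspondence can be made concrete with a minimal structural sketch (the coefficients below are hypothetical): fixing X = x while holding the exogenous terms at u yields the potential outcome Y_x(u).

```python
# A two-equation structural model with hypothetical coefficients:
#   X = U_x          (exogenous background determines the treatment)
#   Y = 2*X + U_y    (outcome responds linearly to the treatment)
def Y(x, u_y):
    return 2 * x + u_y

# The potential outcome Y_x(u) is the solution for Y when the equation
# for X is replaced by the constant X = x, with the exogenous U fixed at u.
u = {"u_x": 1, "u_y": 3}
y_1 = Y(1, u["u_y"])   # Y_{x=1}(u): outcome had X been set to 1
y_0 = Y(0, u["u_y"])   # Y_{x=0}(u): outcome had X been set to 0

# The unit-level causal effect equals the structural coefficient on X.
print(y_1 - y_0)
```

In this linear sketch, the RCM's unit-level causal effect and the SEM's structural coefficient coincide, which is the heart of the equivalence Pearl describes.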

References

*Holland, Paul. "Statistics and Causal Inference", Journal of the American Statistical Association, Vol. 81, No. 396 (Dec., 1986), pp. 945-960.
*Rubin, Donald. "Assignment to Treatment Group on the Basis of a Covariate", Journal of Educational Statistics, Vol. 2, No. 1 (1977), pp. 1-26.
*Rubin, Donald. "Bayesian Inference for Causal Effects: The Role of Randomization", The Annals of Statistics, Vol. 6, No. 1 (1978), pp. 34-58.
*Rubin, Donald. "Estimating Causal Effects of Treatments in Randomized and Nonrandomized Studies", Journal of Educational Psychology, Vol. 66, No. 5 (1974), pp. 688-701.

External links

* [http://bayes.cs.ucla.edu/LECTURE/lecture_sec1.htm "The Art and Science of Cause and Effect"] : a slide show and tutorial lecture by Judea Pearl

