
Linear Models: A Useful “Microscope” for Causal Analysis

Judea Pearl

Abstract

This note reviews basic techniques of linear path analysis and demonstrates, using simple examples, how causal phenomena of non-trivial character can be understood, exemplified and analyzed using diagrams and a few algebraic steps. The techniques allow for swift assessment of how various features of the model impact the phenomenon under investigation. This includes: Simpson’s paradox, case–control bias, selection bias, missing data, collider bias, reverse regression, bias amplification, near instruments, and measurement errors.

1 Introduction

Many concepts and phenomena in causal analysis were first detected, quantified, and exemplified in linear structural equation models (SEMs) before they were understood in full generality and applied to nonparametric problems. Linear SEMs can serve as a “microscope” for causal analysis; they provide simple and visual representations of the causal assumptions in the model and often enable us to derive closed-form expressions for quantities of interest which, in turn, can be used to assess how various aspects of the model affect the phenomenon under investigation. Likewise, linear models can be used to test general hypotheses and to generate counter-examples to over-ambitious conjectures.

Despite their ubiquity, however, techniques for using linear models in that capacity have all but disappeared from the main SEM literature, where they have been replaced by matrix algebra on the one hand, and software packages on the other. Very few analysts today are familiar with traditional methods of path tracing [1–4] which, for small problems, can provide both intuitive insight and easy derivations using elementary algebra.

This note attempts to fill this void by introducing the basic techniques of path analysis to modern researchers, and demonstrating, using simple examples, how concepts and issues in modern causal analysis can be understood and analyzed in SEM. These include: Simpson’s paradox, case–control bias, selection bias, collider bias, reverse regression, bias amplification, near instruments, measurement errors, and more.

2 Preliminaries

2.1 Covariance, regression, and correlation

We start with the standard definition of variance and covariance on a pair of variables X and Y. The variance of X is defined as

\[ \sigma_X^2 = E\big[(X - E(X))^2\big] \]

and measures the degree to which X deviates from its mean $E(X)$.

The covariance of X and Y is defined as

\[ \sigma_{XY} = E\big[(X - E(X))(Y - E(Y))\big] \]

and measures the degree to which X and Y covary.

Associated with the covariance, we define two other measures of association: (1) the regression coefficient $\beta_{YX}$ and (2) the correlation coefficient $\rho_{XY}$. The relationships among the three are given by the following equations:

\[ \rho_{XY} = \frac{\sigma_{XY}}{\sigma_X \sigma_Y} \tag{1} \]

\[ \beta_{YX} = \frac{\sigma_{XY}}{\sigma_X^2} = \rho_{XY}\,\frac{\sigma_Y}{\sigma_X} \tag{2} \]

We note that $\rho_{XY}$ is dimensionless and confined to the unit interval, $0 \le |\rho_{XY}| \le 1$. The regression coefficient, $\beta_{YX}$, represents the slope of the least square error line in the prediction of Y given X:

\[ \beta_{YX} = \frac{\partial}{\partial x}\,E(Y \mid X = x) \]

2.2 Partial correlations and regressions

Many questions in causal analysis concern the change in a relationship between X and Y conditioned on a given set Z of variables. The easiest way to define this change is through the partial regression coefficient $\beta_{YX\cdot Z}$, which is given by

\[ \beta_{YX\cdot Z} = \frac{\partial}{\partial x}\,E(Y \mid X = x, Z = z) \]

In words, $\beta_{YX\cdot Z}$ is the slope of the regression line of Y on X when we consider only cases for which $Z = z$.

The partial correlation coefficient $\rho_{YX\cdot Z}$ can be defined by normalizing $\beta_{YX\cdot Z}$:

\[ \rho_{YX\cdot Z} = \beta_{YX\cdot Z}\;\sigma_{X\cdot Z}/\sigma_{Y\cdot Z} \]

A well-known result in regression analysis [5] permits us to express $\rho_{YX\cdot Z}$ recursively in terms of pair-wise correlation coefficients. When Z is a singleton, this reduction reads:

\[ \rho_{YX\cdot Z} = \frac{\rho_{YX} - \rho_{YZ}\,\rho_{XZ}}{\sqrt{(1 - \rho_{YZ}^2)(1 - \rho_{XZ}^2)}} \tag{3} \]

Accordingly, we can also express $\sigma_{YX\cdot Z}$, $\sigma^2_{X\cdot Z}$, and $\beta_{YX\cdot Z}$ in terms of pair-wise relationships, which gives:

\[ \sigma_{YX\cdot Z} = \sigma_{YX} - \frac{\sigma_{YZ}\,\sigma_{XZ}}{\sigma_Z^2} \tag{4} \]

\[ \sigma^2_{X\cdot Z} = \sigma_X^2 - \frac{\sigma_{XZ}^2}{\sigma_Z^2} \tag{5} \]

\[ \beta_{YX\cdot Z} = \frac{\sigma_{YX\cdot Z}}{\sigma^2_{X\cdot Z}} = \frac{\sigma_Z^2\,\sigma_{YX} - \sigma_{YZ}\,\sigma_{XZ}}{\sigma_Z^2\,\sigma_X^2 - \sigma_{XZ}^2} \tag{6} \]

Note that none of these conditional associations depends on the level z at which we condition variable Z; this is one of the features that makes linear analysis easy to manage and, at the same time, limited in the spectrum of relationships it can capture.
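These reductions are exact for jointly normal variables, which makes them easy to check numerically. The following minimal Python sketch (an illustration added here, not part of the original text; the covariance values are arbitrary) confirms that eq. [6], assembled from pair-wise covariances, agrees with the OLS coefficient of X in a regression of Y on (X, Z):

```python
# Sanity check of eq. [6]: the partial regression coefficient built from
# pair-wise covariances must match the coefficient of X in an OLS regression
# of Y on (X, Z).
import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.5, 0.3],   # covariance matrix of (X, Z, Y)
                [0.5, 1.0, 0.4],
                [0.3, 0.4, 1.0]])
x, z, y = rng.multivariate_normal(np.zeros(3), cov, size=500_000).T

s_xz = np.cov(x, z)[0, 1]
s_yx = np.cov(y, x)[0, 1]
s_yz = np.cov(y, z)[0, 1]
s_x2, s_z2 = x.var(), z.var()

b_eq6 = (s_z2 * s_yx - s_yz * s_xz) / (s_z2 * s_x2 - s_xz**2)   # eq. [6]
b_ols = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)[0][0]
print(b_eq6, b_ols)   # both ~ 0.133 = (0.3 - 0.4*0.5)/(1 - 0.5**2)
```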

2.3 Path diagrams and structural equation models

A linear structural equation model (SEM) is a system of linear equations among a set V of variables, such that each variable appears on the left hand side of at most one equation. For each equation, the variable on its left hand side is called the dependent variable, and those on the right hand side are called independent or explanatory variables. For example, the equation below

\[ y = \alpha x + \beta z + u_Y \tag{7} \]

declares Y as the dependent variable, X and Z as explanatory variables, and $u_Y$ as an “error” or “disturbance” term, representing all factors omitted from V that, together with X and Z, determine the value of Y. A structural equation should be interpreted as an assignment process, that is, to determine the value of Y, nature consults the values of the variables X, Z, and $u_Y$ and, based on their linear combination in eq. [7], assigns a value to Y.

This interpretation renders the equality sign in eq. [7] non-symmetrical, since the values of X and Z are not determined by inverting eq. [7] but by other equations, for example,

\[ x = \gamma z + u_X \tag{8} \]

\[ z = u_Z \tag{9} \]

Figure 1: Path diagrams capturing the directionality of the assignment process of eqs. [7]–[9] as well as possible correlations among omitted factors.

The directionality of this assignment process is captured by a path diagram, in which the nodes represent variables and the arrows represent the non-zero coefficients in the equations. The diagram in Figure 1(a) represents the SEM equations of eqs. [7]–[9] and the assumption of zero correlations between the U variables,

\[ \sigma_{u_X u_Y} = \sigma_{u_X u_Z} = \sigma_{u_Y u_Z} = 0. \]

The diagram in Figure 1(b), on the other hand, represents eqs. [7]–[9] together with the assumption

\[ \sigma_{u_X u_Z} = \sigma_{u_Y u_Z} = 0, \]

while $\sigma_{u_X u_Y}$ remains undetermined; this undetermined error covariance is drawn as a double-arrowed arc $X \leftrightarrow Y$.

The coefficients $\alpha$, $\beta$, and $\gamma$ are called path coefficients, or structural parameters, and they carry causal information. For example, $\alpha$ stands for the change in Y induced by raising X one unit, while keeping all other variables constant.¹

The assumption of linearity makes this change invariant to the levels at which we keep those other variables constant, including the error variables; a property called “effect homogeneity.” Since errors (e.g., $u_Y$) capture variations among individual units (i.e., subjects, samples, or situations), effect homogeneity amounts to claiming that all units react equally to any treatment, which may exclude applications with profoundly heterogeneous subpopulations.

2.4 Wright’s path-tracing rules

In 1921, the geneticist Sewall Wright developed an ingenious method by which the covariance of any two variables can be determined swiftly, by mere inspection of the diagram [1]. Wright’s method consists of equating the (standardized²) covariance $\sigma_{XY} = \rho_{XY}$ between any pair of variables to the sum of products of path coefficients and error covariances along all d-connected paths between X and Y. A path is d-connected if it does not traverse any collider (i.e., head-to-head arrows, as in $X \rightarrow Z \leftarrow Y$).

For example, in Figure 1(a), the standardized covariance $\sigma_{YX}$ is obtained by summing $\alpha$ with the product $\gamma\beta$, thus yielding $\sigma_{YX} = \alpha + \gamma\beta$, while in Figure 1(b) we get $\sigma_{YX} = \alpha + \gamma\beta + C_{XY}$, where $C_{XY} = \sigma_{u_X u_Y}$ is the covariance carried by the arc $X \leftrightarrow Y$. Note that for the pair $(Z, u_Y)$, we get $\sigma_{Z u_Y} = 0$, since the path $Z \rightarrow Y \leftarrow u_Y$ is not d-connected.

The method above is valid for standardized variables, namely variables normalized to have zero mean and unit variance. For non-standardized variables the method needs to be modified slightly: the product associated with a path p is multiplied by the variance of the variable that acts as the “root” for path p. For example, for Figure 1(a) we have

\[ \sigma_{YX} = \alpha\,\sigma_X^2 + \gamma\beta\,\sigma_Z^2, \]

since X serves as the root for the path $X \rightarrow Y$ and Z serves as the root for the path $X \leftarrow Z \rightarrow Y$. In Figure 1(b), however, we get

\[ \sigma_{YX} = \alpha\,\sigma_X^2 + \gamma\beta\,\sigma_Z^2 + C_{XY}, \]

where the double arrow $X \leftrightarrow Y$ serves as its own root.

2.5 Reading partial correlations from path diagrams

The reduction from partial to pair-wise correlations summarized in eqs. [4]–[6], when combined with Wright’s path-tracing rules, permits us to extend the latter so as to read partial correlations directly from the diagram. For example, to read the partial regression coefficient $\beta_{YX\cdot Z}$, we start with a standardized model where all variances are unity (hence, covariances and correlations coincide), and apply eq. [6] with $\sigma_X = \sigma_Y = \sigma_Z = 1$ to get:

\[ \beta_{YX\cdot Z} = \frac{\sigma_{YX} - \sigma_{YZ}\,\sigma_{XZ}}{1 - \sigma_{XZ}^2} \tag{10} \]

At this point, each pair-wise covariance can be computed from the diagram through path-tracing and, substituted in eq. [10], yields an expression for the partial regression coefficient $\beta_{YX\cdot Z}$.

To witness, the pair-wise covariances for Figure 1(a) are:

\[ \sigma_{ZX} = \gamma \tag{11} \]

\[ \sigma_{YX} = \alpha + \beta\gamma \tag{12} \]

\[ \sigma_{YZ} = \beta + \alpha\gamma \tag{13} \]

Substituting in eq. [10], we get

\[ \beta_{YX\cdot Z} = \frac{(\alpha + \beta\gamma) - (\beta + \alpha\gamma)\gamma}{1 - \gamma^2} = \frac{\alpha(1 - \gamma^2)}{1 - \gamma^2} = \alpha \tag{14} \]
Indeed, we know that, for a confounding-free model like Figure 1(a), the direct effect is identifiable and given by the partial regression coefficient $\beta_{YX\cdot Z} = \alpha$. Repeating the same calculation on the model of Figure 1(b) yields:

\[ \beta_{YX\cdot Z} = \alpha + \frac{C_{XY}}{1 - \gamma^2}, \]

leaving $\alpha$ non-identifiable.
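As a numerical sanity check (added here for illustration; the coefficient values are arbitrary), simulating the standardized model of Figure 1(a) reproduces both eq. [12] and eq. [14]:

```python
# Simulate Figure 1(a) with unit-variance variables and verify that
# sigma_YX = alpha + beta*gamma (eq. [12]) and beta_{YX.Z} = alpha (eq. [14]).
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
alpha, beta, gamma = 0.5, 0.4, 0.3              # arbitrary path coefficients

z = rng.normal(size=n)                                         # z = u_Z, var 1
x = gamma*z + rng.normal(scale=np.sqrt(1 - gamma**2), size=n)  # var(x) = 1
v_uy = 1 - (alpha**2 + beta**2 + 2*alpha*beta*gamma)           # so var(y) = 1
y = alpha*x + beta*z + rng.normal(scale=np.sqrt(v_uy), size=n)

print(np.cov(y, x)[0, 1], alpha + beta*gamma)   # eq. [12]
b = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)[0]
print(b[0], alpha)                              # eq. [14]: recovers alpha
```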

Armed with the ability to read partial regressions, we are now prepared to demonstrate some peculiarities of causal analysis.

3 The microscope at work: examples and their implications

3.1 Simpson’s paradox

Simpson’s paradox describes a phenomenon whereby an association between two variables reverses sign upon conditioning on a third variable, regardless of the value taken by the latter. The history of this paradox and the reasons it evokes surprise and disbelief are described in Chapter 6 of [7]. The conditions under which association reversal appears in linear models can be seen directly in Figure 1(a). Comparing eqs. [12] and [14] we obtain

\[ \beta_{YX} - \beta_{YX\cdot Z} = (\alpha + \beta\gamma) - \alpha = \beta\gamma \]

Thus, if $\beta\gamma$ has a different sign from $\alpha$, it is quite possible to have the regression of Y on X, $\beta_{YX} = \alpha + \beta\gamma$, change sign upon conditioning on $Z = z$, for every z. The magnitude of the change depends on the product $\beta\gamma$, which measures the extent to which X and Y are confounded in the model.
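A minimal simulation of the reversal (an added illustration; the values are chosen, arbitrarily, so that $\beta\gamma = -0.72$ overwhelms $\alpha = 0.3$):

```python
# Simpson's paradox in Figure 1(a): the marginal slope beta_YX = alpha + beta*gamma
# is negative, while the conditional slope beta_{YX.Z} = alpha is positive.
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
alpha, beta, gamma = 0.3, -0.8, 0.9

z = rng.normal(size=n)
x = gamma*z + rng.normal(scale=np.sqrt(1 - gamma**2), size=n)
v_uy = 1 - (alpha**2 + beta**2 + 2*alpha*beta*gamma)
y = alpha*x + beta*z + rng.normal(scale=np.sqrt(v_uy), size=n)

b_yx = np.cov(y, x)[0, 1] / x.var()
b_yx_z = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)[0][0]
print(b_yx, b_yx_z)   # ~ -0.42 vs ~ +0.30: the association reverses sign
```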

3.2 Conditioning on intermediaries and their proxies

Conventional wisdom informs us that, in estimating the effect of one variable on another, one should not adjust for any covariate that lies on the pathway between the two [8]. It took decades for epidemiologists to discover that a similar prohibition applies to proxies of intermediaries [9]. The amount of bias introduced by such adjustment can be assessed from Figure 2.

Figure 2: Path diagram depicting an intermediate variable (Z) and its proxy (W). Conditioning on W would distort the regression of Y on X.

Here, the effect of X on Y is simply $\alpha\beta$ (labeling X → Z by $\alpha$, Z → Y by $\beta$, and Z → W by $\gamma$), as is reflected by the regression slope $\beta_{YX} = \alpha\beta$. If we condition on the intermediary Z, the regression slope vanishes, since the equality $\sigma_{YX} = \sigma_{YZ}\,\sigma_{ZX}$ renders the numerator of eq. [10] zero. If we condition on a proxy W of Z, eq. [10] yields

\[ \beta_{YX\cdot W} = \frac{\sigma_{YX} - \sigma_{YW}\,\sigma_{XW}}{1 - \sigma_{XW}^2} = \frac{\alpha\beta(1 - \gamma^2)}{1 - \alpha^2\gamma^2} \tag{15} \]

which unveils a bias of size

\[ \beta_{YX} - \beta_{YX\cdot W} = \frac{\alpha\beta\gamma^2(1 - \alpha^2)}{1 - \alpha^2\gamma^2} \]

As expected, the bias disappears for $\gamma = 0$ and intensifies for $\gamma = 1$, where conditioning on W amounts to suppressing all variations in Z.

Speaking of suppressing variations, the model in Figure 3

Figure 3: Conditioning on W does not distort the regression of Y on X.

may carry some surprise. Conditioning on W in this model also suppresses variations in Z, especially for high $\gamma$ (labeling W → Z by $\gamma$, X → Z by $\alpha$, and Z → Y by $\beta$) and, yet, it introduces no bias whatsoever; the partial regression slope given by eq. [10] is

\[ \beta_{YX\cdot W} = \frac{\sigma_{YX} - \sigma_{YW}\,\sigma_{XW}}{1 - \sigma_{XW}^2} = \alpha\beta \tag{16} \]

(since $\sigma_{XW} = 0$), which is precisely the causal effect of X on Y. It seems as though no matter how tightly we “clamp” Z by controlling W, the causal effect of X on Y remains unaltered. Appendix I explains this counter-intuitive result.
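The contrast between Figures 2 and 3 is easy to reproduce numerically. The sketch below (an added illustration with arbitrary parameter values; the coefficient labels follow the text) conditions on W in both models and compares the slopes with eqs. [15] and [16]:

```python
# Conditioning on W: biased in Figure 2 (W is a proxy of the mediator Z),
# unbiased in Figure 3 (W is a cause of the mediator Z).
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
a, b, g = 0.5, 0.5, 0.7     # alpha, beta, gamma

# Figure 2: X -> Z -> Y, with Z -> W
x = rng.normal(size=n)
z = a*x + rng.normal(scale=np.sqrt(1 - a**2), size=n)
y = b*z + rng.normal(scale=np.sqrt(1 - b**2), size=n)
w = g*z + rng.normal(scale=np.sqrt(1 - g**2), size=n)
s2 = np.linalg.lstsq(np.column_stack([x, w]), y, rcond=None)[0][0]
print(s2, a*b*(1 - g**2)/(1 - a**2*g**2))   # eq. [15]: biased below a*b = 0.25

# Figure 3: X -> Z -> Y, with W -> Z
w = rng.normal(size=n)
z = a*x + g*w + rng.normal(scale=np.sqrt(1 - a**2 - g**2), size=n)
y = b*z + rng.normal(scale=np.sqrt(1 - b**2), size=n)
s3 = np.linalg.lstsq(np.column_stack([x, w]), y, rcond=None)[0][0]
print(s3, a*b)                              # eq. [16]: unbiased
```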

3.3 Case–control bias

In the last section, we explained the bias introduced by conditioning on an intermediate variable (or its proxy) as a restriction on the flow of information between X and Y. This explanation is not entirely satisfactory, as can be seen from the model of Figure 4.

Figure 4: Conditioning on Z, a descendant of Y, biases the regression of Y on X.

Here, Z is not on the pathway between X and Y, and one might surmise that no bias would be introduced by conditioning on Z, but analysis dictates otherwise. Labeling X → Y by $\alpha$ and Y → Z by $\beta$, path tracing combined with eq. [10] gives:

\[ \beta_{YX\cdot Z} = \frac{\sigma_{YX} - \sigma_{YZ}\,\sigma_{XZ}}{1 - \sigma_{XZ}^2} = \frac{\alpha(1 - \beta^2)}{1 - \alpha^2\beta^2} \tag{17} \]

and yields the bias

\[ \beta_{YX\cdot Z} - \beta_{YX} = -\frac{\alpha\beta^2(1 - \alpha^2)}{1 - \alpha^2\beta^2} \tag{18} \]

This bias reflects what economists call “selection bias” [10] and epidemiologists “case–control bias” [11], which occurs when only patients for whom the outcome Y is evidenced (e.g., a complication of a disease) are counted in the database. An intuitive explanation of this bias (invoking virtual colliders) is given in [7, p. 339]. In contrast, conditioning on a proxy of the explanatory variable X, as in Figure 5 (with X → Y labeled $\alpha$ and X → Z labeled $\beta$), introduces no bias, since

Figure 5: Conditioning on Z, a descendant of X, does not bias the regression of Y on X.

\[ \beta_{YX\cdot Z} = \frac{\sigma_{YX} - \sigma_{YZ}\,\sigma_{XZ}}{1 - \sigma_{XZ}^2} = \frac{\alpha - \alpha\beta\cdot\beta}{1 - \beta^2} = \alpha \tag{19} \]

This can also be deduced from the conditional independence $(Z \perp\!\!\!\perp Y \mid X)$, which is implied by the diagram in Figure 5 but not in Figure 4. However, assessing the size of the induced bias, as we did in eq. [18], requires an algebraic analysis of path tracing.
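Both cases can be checked in a few lines (an added illustration; parameter values arbitrary):

```python
# Conditioning on a descendant of Y (Figure 4) biases the slope, eq. [17];
# conditioning on a descendant of X (Figure 5) does not, eq. [19].
import numpy as np

rng = np.random.default_rng(4)
n = 500_000
a, b = 0.5, 0.8             # alpha (X -> Y) and beta (the arrow into Z)

x = rng.normal(size=n)
y = a*x + rng.normal(scale=np.sqrt(1 - a**2), size=n)

z = b*y + rng.normal(scale=np.sqrt(1 - b**2), size=n)   # Figure 4: Y -> Z
s4 = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)[0][0]
print(s4, a*(1 - b**2)/(1 - a**2*b**2))                 # ~ 0.214, biased

z = b*x + rng.normal(scale=np.sqrt(1 - b**2), size=n)   # Figure 5: X -> Z
s5 = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)[0][0]
print(s5, a)                                            # ~ 0.5, unbiased
```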

3.4 Sample selection bias

The two examples above are special cases of a more general phenomenon called “selection bias,” which occurs when samples are preferentially selected into the data set, depending on the values of some variables in the model [12–15]. In Figure 6, for example, $Z = 1$ represents inclusion in the data set and $Z = 0$ exclusion, and the selection decision is shown to be a function of both X and Y (labeling X → Y by $\alpha$, X → Z by $\beta$, and Y → Z by $\gamma$). Since inclusion ($Z = 1$) amounts to conditioning on Z, we may ask what the regression of Y on X is in the observed data, $\beta_{YX\cdot Z}$, compared with the regression in the entire population, $\beta_{YX}$.

Figure 6: Conditioning on $Z = 1$, which represents inclusion in the data set, biases the regression of Y on X, unless $\gamma = 0$.

Applying our path-tracing analysis in eq. [10] we get:

\[ \beta_{YX\cdot Z} = \frac{\sigma_{YX} - \sigma_{YZ}\,\sigma_{XZ}}{1 - \sigma_{XZ}^2} = \frac{\alpha - (\gamma + \alpha\beta)(\beta + \alpha\gamma)}{1 - (\beta + \alpha\gamma)^2} \tag{20} \]

We see that a substantial bias may result from conditioning on Z, persisting even when X and Y are not correlated, namely when $\alpha = 0$. Note also that the bias disappears for $\gamma = 0$, as in Figure 5, but not for $\beta = 0$, which returns us to the case–control model of Figure 4.
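A simulation sketch (added here; values arbitrary) makes the point vivid: with $\alpha = 0$, X and Y are marginally independent, yet conditioning on Z, or literally truncating the sample on Z, induces a strong association. The truncation step is my own nonlinear analog of “inclusion,” not the paper’s linear conditioning:

```python
# Selection bias in Figure 6 with alpha = 0: X and Y are independent,
# but conditioning on the selection variable Z makes them associated.
import numpy as np

rng = np.random.default_rng(5)
n = 500_000
a, b, g = 0.0, 0.6, 0.6     # alpha (X -> Y), beta (X -> Z), gamma (Y -> Z)

x = rng.normal(size=n)
y = a*x + rng.normal(scale=np.sqrt(1 - a**2), size=n)
v_uz = 1 - (b**2 + g**2 + 2*b*g*a)
z = b*x + g*y + rng.normal(scale=np.sqrt(v_uz), size=n)

s_xz, s_yz = b + a*g, g + a*b
pred = (a - s_yz*s_xz) / (1 - s_xz**2)                  # eq. [20], ~ -0.56
est = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)[0][0]
print(est, pred)

sel = z > 0                                             # crude "inclusion" rule
print(np.polyfit(x[sel], y[sel], 1)[0])                 # also strongly negative
```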

Selection bias is symptomatic of a general phenomenon associated with conditioning on collider nodes (Z in our example). The phenomenon involves spurious associations induced between two causes upon observing their common effect, since any information refuting one cause should make the other more probable. It has been known as Berkson’s paradox [16], “explaining away” [17], or simply “collider bias.”³

3.5 Missing data

In contrast to selection bias, where exclusion removes an entire unit from the data set, in missing data problems a unit may have each of its variables masked independently of the others [18, p. 89]. Therefore, the diagram representing the missingness process should assign to each variable $V_i$ a “switch” $R_i$, called its “missingness mechanism,” which determines whether $V_i$ is observed ($R_i = 0$) or masked ($R_i = 1$). The arrows pointing to $R_i$ tell us which variables determine whether $R_i$ fires ($R_i = 1$) or not ($R_i = 0$). In Figure 7(a), for example, the missingness

Figure 7: Missingness diagrams in which conditioning on $R_X = 0$ or $R_Y = 0$ represents unmasking the values of X and Y, respectively. The model parameters can all be estimated bias-free from data generated by either model, though each model requires a different estimation procedure.

of X, denoted $R_X$, depends only on the latent variable L, while the missingness of Y is shown to depend on both L and X.

Assume we wish to estimate the covariance $\sigma_{XY}$ from partially observed data generated by the model of Figure 7(a); can we obtain an unbiased estimate of $\sigma_{XY}$? The question boils down to expressing $\sigma_{XY}$ in terms of the information available to us, namely the values of X and Y that are revealed to us whenever $R_X = 0$ or $R_Y = 0$ (or both). If we simply estimate $\sigma_{XY}$ from samples in which both X and Y are observed, that would amount to conditioning on both $R_X = 0$ and $R_Y = 0$, which would introduce a bias, since the pair $(X, Y)$ is not independent of the pair $(R_X, R_Y)$ (owed to the unblocked path from Y to $R_Y$).

The graph reveals, however, that $\sigma_{XY}$ can nevertheless be estimated bias-free from the information available, using two steps. First, we note that X is independent of its missingness mechanism $R_X$, since the path from X to $R_X$ is blocked (by the collider at $R_Y$). Therefore, $\sigma_X^2 = \mathrm{var}(X \mid R_X = 0)$.⁴ This means that we can estimate $\sigma_X^2$ from the samples in which X is observed, regardless of whether Y is missing. Next, we note that the regression slope $\beta_{YX}$ can be estimated (e.g., using OLS) from samples in which both X and Y are observed. This is because conditioning on $R_X$ and $R_Y$ is similar to conditioning on Z in Figure 5, where Z is a proxy of the explanatory variable X.

Putting the two together (using eq. [2]) we can write:

\[ \sigma_{XY} = \sigma_X^2\,\beta_{YX} = \mathrm{var}(X \mid R_X = 0)\;\beta_{YX\cdot R_X=0, R_Y=0}, \]

which guarantees that the product of the two estimates on the right-hand side results in an unbiased estimate of $\sigma_{XY}$. Note that a similar analysis of Figure 7(b) would yield

\[ \sigma_{XY} = \sigma_Y^2\,\beta_{XY} = \mathrm{var}(Y \mid R_Y = 0)\;\beta_{XY\cdot R_X=0, R_Y=0}, \]

which instructs us to estimate $\sigma_Y^2$ using samples in which Y is observed and to estimate the regression of X on Y from samples in which both X and Y are observed. Remarkably, the two models are statistically indistinguishable, yet each dictates a different estimation procedure, thus demonstrating that no model-blind estimator can be guaranteed to deliver an unbiased estimate, even when one exists. If the path diagram permits no decomposition of $\sigma_{XY}$ into terms conditioned on $R_X = 0$ and $R_Y = 0$ (as would be the case, for example, if an arrow existed from X to $R_X$ in Figure 7(a)), we would conclude that $\sigma_{XY}$ is not estimable by any method whatsoever. A general analysis of missing data problems using causal graphs is given in Mohan et al. [21].
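The recipe can be exercised in simulation. In the sketch below (an added illustration; the threshold mechanisms are stand-ins for the missingness switches, cf. footnote 4), the naive complete-case estimate of $\sigma_{XY}$ is biased, while the graph-guided two-step estimate is not:

```python
# Two-step estimation of sigma_XY for the model of Figure 7(a):
# R_X depends on L only; R_Y depends on L and X; X -> Y with slope alpha.
import numpy as np

rng = np.random.default_rng(6)
n = 500_000
alpha = 0.7

L = rng.normal(size=n)                 # latent cause of missingness
x = rng.normal(size=n)
y = alpha*x + rng.normal(size=n)
rx = L > 0.5                           # True: X is masked
ry = (L + x) > 1.0                     # True: Y is masked

both = ~rx & ~ry                       # complete cases
naive = np.cov(x[both], y[both])[0, 1]                   # biased (selection on X)
vx = x[~rx].var()                                        # step 1: var(X | R_X = 0)
slope = np.cov(x[both], y[both])[0, 1] / x[both].var()   # step 2: beta_YX via OLS
print(naive, vx*slope, alpha)          # truth: sigma_XY = alpha * var(X) = 0.7
```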

3.6 The M-bias

The M-bias is another instance of Berkson’s paradox, where the conditioning variable, Z, is a pre-treatment covariate, as depicted in Figure 8.

Figure 8: Adjusting for Z, which may be either a pre-treatment or a post-treatment covariate, introduces bias where none exists. The better the predictor, the higher the bias.

The parameters $\epsilon_1$ and $\epsilon_2$ represent the error covariances $\sigma_{u_X u_Z}$ and $\sigma_{u_Y u_Z}$, respectively, which can be generated, for example, by latent variables affecting each of these pairs.

To analyze the size of this bias, we apply eq. [10] (with $\sigma_{YX} = 0$, $\sigma_{XZ} = \epsilon_1$, and $\sigma_{YZ} = \epsilon_2$) and get:

\[ \beta_{YX\cdot Z} = \frac{\sigma_{YX} - \sigma_{YZ}\,\sigma_{XZ}}{1 - \sigma_{XZ}^2} = -\frac{\epsilon_1\,\epsilon_2}{1 - \epsilon_1^2} \tag{21} \]

Thus, the bias induced increases substantially when $\epsilon_1$ approaches one, that is, when Z becomes a good predictor of X. Ironically, this is precisely when investigators have all the textbook reasons to adjust for Z. Being pre-treatment, the collider Z cannot be distinguished from a confounder (as in Figure 1(a)) by any statistical means, which has lured some statisticians to conclude that “there is no reason to avoid adjustment for a variable describing subjects before treatment” [22, p. 76].
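A short simulation (added; the error covariances are arbitrary, chosen large to dramatize the effect) confirms eq. [21]:

```python
# M-bias (Figure 8): X and Y are unconfounded (zero regression bias),
# yet adjusting for the pre-treatment collider Z creates bias, eq. [21].
import numpy as np

rng = np.random.default_rng(7)
n = 500_000
e1, e2 = 0.65, 0.65         # error covariances cov(u_X,u_Z), cov(u_Y,u_Z)

cov = np.array([[1.0, e1, 0.0],     # (X, Z, Y); X and Y errors uncorrelated
                [e1, 1.0, e2],
                [0.0, e2, 1.0]])
x, z, y = rng.multivariate_normal(np.zeros(3), cov, size=n).T

print(np.cov(y, x)[0, 1] / x.var())     # ~ 0: no bias before adjustment
s = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)[0][0]
print(s, -e1*e2/(1 - e1**2))            # eq. [21]: ~ -0.73 after adjustment
```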

3.7 Reverse regression

Is it possible that men would earn a higher salary than equally qualified women and, simultaneously, men are more qualified than women doing equally paid jobs? This counter-intuitive condition can indeed exist and has given rise to a controversy called “reverse regression”: some sociologists argued that, in salary discrimination cases, we should not compare salaries of equally qualified men and women but, rather, compare qualifications of equally paid men and women [23]. The phenomenon can be demonstrated in Figure 9.

Figure 9: Path diagram in which Z acts as a mediator between X and Y, demonstrating negative reverse regression $\beta_{ZX\cdot Y} < 0$ for positive $\alpha$, $\beta$, and $\gamma$.

Let X stand for gender (or age, or socioeconomic background), Y for job earnings, and Z for qualification (labeling X → Y by $\alpha$, X → Z by $\beta$, and Z → Y by $\gamma$). The partial regression $\beta_{YX\cdot Z}$ encodes the differential earning of males over females having the same qualifications ($Z = z$), while $\beta_{ZX\cdot Y}$ encodes the differential qualification of males over females earning the same salary ($Y = y$).

For the model in Figure 9, we have

\[ \beta_{YX\cdot Z} = \alpha, \qquad \beta_{ZX\cdot Y} = \frac{\sigma_{ZX} - \sigma_{ZY}\,\sigma_{XY}}{1 - \sigma_{XY}^2} = \frac{\beta - (\gamma + \alpha\beta)(\alpha + \beta\gamma)}{1 - (\alpha + \beta\gamma)^2} \]

Surely, for any positive $\alpha$ and $\gamma$ we can choose $\beta$ so as to make $\beta_{ZX\cdot Y}$ negative. For example, the combination $\alpha = \beta = \gamma = 0.5$ yields

\[ \beta_{ZX\cdot Y} = \frac{0.5 - 0.75 \times 0.75}{1 - 0.75^2} = -\frac{1}{7} \]

Thus, there is no contradiction in finding men earning a higher salary than equally qualified women and, simultaneously, men being more qualified than women doing equally paid jobs. A negative $\beta_{ZX\cdot Y}$ may be a natural consequence of a male-favoring hiring policy ($\alpha > 0$), a male-favoring training policy ($\beta > 0$), and qualification-dependent earnings ($\gamma > 0$).
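The numerical example is easy to verify by simulation (an added illustration):

```python
# Reverse regression (Figure 9) with alpha = beta = gamma = 0.5:
# beta_{YX.Z} = +0.5 while beta_{ZX.Y} = -1/7, with no contradiction.
import numpy as np

rng = np.random.default_rng(8)
n = 500_000
a = b = g = 0.5             # alpha (X -> Y), beta (X -> Z), gamma (Z -> Y)

x = rng.normal(size=n)
z = b*x + rng.normal(scale=np.sqrt(1 - b**2), size=n)
v_uy = 1 - (a**2 + g**2 + 2*a*g*b)
y = a*x + g*z + rng.normal(scale=np.sqrt(v_uy), size=n)

byx_z = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)[0][0]
bzx_y = np.linalg.lstsq(np.column_stack([x, y]), z, rcond=None)[0][0]
print(byx_z, bzx_y)         # ~ +0.5 and ~ -0.143 = -1/7
```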

The question of whether standard or reverse regression is more appropriate for proving discrimination is also clear. The equality $\beta_{YX\cdot Z} = \alpha$ leaves no room for hesitation, because $\alpha$ coincides with the counterfactual definition of the “direct effect of gender on hiring had qualification been the same,” which is the court’s definition of discrimination.

The reason reverse regression appeals to intuition is that it reflects a model in which the employer decides on the qualification needed for a job on the basis of both its salary level and the applicant’s sex. If this were a plausible model, it would indeed be appropriate to prosecute an employer who demands higher qualifications from men than from women. But such a model should place Z as a post-salary variable, for example, $X \rightarrow Z \leftarrow Y$.

3.8 Bias amplification

In the model of Figure 10, Z acts as an instrumental variable, since it is independent of the confounder U and affects Y only through X. If U is unobserved, however, Z cannot be distinguished from a confounder, as in Figure 1(a), in the sense that for every set of parameters $(\alpha, \beta, \gamma)$ in Figure 1(a) one can find a set $(a, b, c, d)$ for the model in Figure 10 such that the observed covariance matrices of the two models are the same. This indistinguishability, together with the fact that Z may be a strong predictor of X, may lure investigators to condition on Z to obtain an unbiased estimate of d [24]. Recent work has shown, however, that such adjustment would amplify the bias created by U [25–27]. The magnitude of this bias and its relation to the pre-conditioning bias, ab, can be computed from the diagram of Figure 10, labeling Z → X by c, X → Y by d, U → X by a, and U → Y by b, as follows:

Figure 10: Bias amplification, $ab/(1 - c^2)$, produced by conditioning on an instrumental variable (Z).

\[ \beta_{YX\cdot Z} = \frac{\sigma_{YX} - \sigma_{YZ}\,\sigma_{XZ}}{1 - \sigma_{XZ}^2} = \frac{(d + ab) - cd\cdot c}{1 - c^2} = d + \frac{ab}{1 - c^2} \tag{22} \]

We see that the bias created, $\beta_{YX\cdot Z} - d = ab/(1 - c^2)$, is proportional to the pre-existing bias ab and increases with c; the better Z predicts X, the higher the bias. An intuitive explanation of this phenomenon is given in Pearl [26].
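Eq. [22] can be checked directly (an added illustration; parameter values arbitrary):

```python
# Bias amplification (Figure 10): conditioning on the instrument Z inflates
# the confounding bias from ab to ab/(1 - c^2).
import numpy as np

rng = np.random.default_rng(9)
n = 500_000
a, b, c, d = 0.4, 0.4, 0.8, 0.3      # U->X, U->Y, Z->X, X->Y

z = rng.normal(size=n)
u = rng.normal(size=n)
x = c*z + a*u + rng.normal(scale=np.sqrt(1 - c**2 - a**2), size=n)
v_uy = 1 - (d**2 + b**2 + 2*d*b*a)
y = d*x + b*u + rng.normal(scale=np.sqrt(v_uy), size=n)

byx = np.cov(y, x)[0, 1] / x.var()
byx_z = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)[0][0]
print(byx - d, a*b)                  # pre-conditioning bias: ab = 0.16
print(byx_z - d, a*b/(1 - c**2))     # eq. [22]: amplified to ~ 0.44
```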

3.9 Near instruments – amplifiers or attenuators?

Figure 11: A diagram where Z acts both as an instrument and as a confounder.

The model in Figure 11 is indistinguishable from that of Figure 10 when U is unobserved. However, here Z acts both as an instrument and as a confounder (labeling Z → X by c, Z → Y by d, U → X by a, U → Y by b, and X → Y by $\alpha$). Conditioning on Z is beneficial in blocking the confounding path $X \leftarrow Z \rightarrow Y$ and harmful in amplifying the bias ab created by U. The trade-off between these two tendencies can be quantified by computing $\beta_{YX\cdot Z}$, yielding

\[ \beta_{YX\cdot Z} = \frac{\sigma_{YX} - \sigma_{YZ}\,\sigma_{XZ}}{1 - \sigma_{XZ}^2} = \alpha + \frac{ab}{1 - c^2} \tag{23} \]

We see that the baseline bias $ab + cd$ is first reduced to ab and then magnified by the factor $1/(1 - c^2)$. For Z to be a bias-reducer, its effect on Y (i.e., d) must exceed its effect on X (i.e., c) by a factor $ab/(1 - c^2)$. This trade-off was assessed by simulations in Myers et al. [28] and analytically in Pearl [29], including an analysis of multiple confounders and nonlinear models.
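The trade-off can be exhibited by varying d (an added illustration; values arbitrary). With $d = 0.5$ conditioning reduces the net bias, while with $d = 0.05$ it amplifies it:

```python
# Near instrument (Figure 11): conditioning on Z trades the baseline bias
# ab + cd for the amplified residual ab/(1 - c^2), eq. [23].
import numpy as np

rng = np.random.default_rng(10)
n = 500_000
t, a, b, c = 0.3, 0.4, 0.4, 0.6      # X->Y, U->X, U->Y, Z->X

for d in (0.5, 0.05):                # Z->Y: strong vs. weak confounding path
    z, u = rng.normal(size=n), rng.normal(size=n)
    x = c*z + a*u + rng.normal(scale=np.sqrt(1 - c**2 - a**2), size=n)
    v_uy = 1 - (t**2 + d**2 + b**2 + 2*t*d*c + 2*t*b*a)
    y = t*x + d*z + b*u + rng.normal(scale=np.sqrt(v_uy), size=n)
    byx = np.cov(y, x)[0, 1] / x.var()
    byx_z = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)[0][0]
    # baseline bias ab + cd vs. post-conditioning bias ab/(1 - c^2)
    print(byx - t, byx_z - t, a*b/(1 - c**2))
```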

3.10 The butterfly

Another model in which conditioning on Z may have both harmful and beneficial effects is seen in Figure 12.

Figure 12: Adjusting for Z may be harmful or beneficial depending on the model’s parameters.

Here, Z is both a collider and a confounder (labeling Z → X by c, Z → Y by d, and X → Y by $\alpha$). Conditioning on Z blocks the confounding path through c and d and at the same time induces a virtual confounding path through the latent variables that create the error covariances $\epsilon_1 = \sigma_{u_X u_Z}$ and $\epsilon_2 = \sigma_{u_Y u_Z}$.

This trade-off can be evaluated from our path-tracing formula, eq. [10], which yields

\[ \beta_{YX\cdot Z} = \alpha - \frac{\epsilon_1\,\epsilon_2}{1 - (c + \epsilon_1)^2} \tag{24} \]

We first note that the pre-conditioning bias

\[ \beta_{YX} - \alpha = cd + d\epsilon_1 + c\epsilon_2 \tag{25} \]

may have positive or negative values even when both $\sigma_{XZ} = 0$ and $\sigma_{YZ} = 0$. This refutes the folklore wisdom according to which a variable Z can be exonerated from confounding considerations if it is uncorrelated with both the treatment (X) and the outcome (Y).

Second, we notice that conditioning on Z may either increase or decrease the bias, depending on the structural parameters. This can be seen by comparing eq. [25] with the post-conditioning bias:

\[ \beta_{YX\cdot Z} - \alpha = -\frac{\epsilon_1\,\epsilon_2}{1 - (c + \epsilon_1)^2} \tag{26} \]

In particular, since eq. [26] is independent of d, it is easy to choose values of d that make eq. [25] either higher or lower than eq. [26].
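Both biases can be reproduced numerically (an added illustration; values arbitrary and kept small so that all variances remain unity):

```python
# The butterfly (Figure 12): pre-conditioning bias cd + d*e1 + c*e2 (eq. [25])
# vs. post-conditioning bias -e1*e2/(1 - (c + e1)^2) (eq. [26]).
import numpy as np

rng = np.random.default_rng(11)
n = 500_000
c, d, t = 0.3, 0.3, 0.3              # Z->X, Z->Y, X->Y
e1, e2 = 0.16, 0.16                  # cov(u_X,u_Z), cov(u_Y,u_Z)

vx = 1 - c**2 - 2*c*e1               # error variances chosen so var(x)=var(y)=1
vy = 1 - (t**2 + d**2 + 2*t*d*(c + e1) + 2*t*c*e2 + 2*d*e2)
cov = np.array([[vx, e1, 0.0],       # correlated errors (u_X, u_Z, u_Y)
                [e1, 1.0, e2],
                [0.0, e2, vy]])
ux, z, uy = rng.multivariate_normal(np.zeros(3), cov, size=n).T
x = c*z + ux
y = t*x + d*z + uy

byx = np.cov(y, x)[0, 1] / x.var()
byx_z = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)[0][0]
print(byx - t, c*d + d*e1 + c*e2)                 # eq. [25]: ~ +0.186
print(byx_z - t, -e1*e2/(1 - (c + e1)**2))        # eq. [26]: ~ -0.032
```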

3.11 Measurement error

Figure 13: Conditioning on Z, a proxy for the unobserved confounder U, does not remove the bias ($\beta_{YX\cdot Z} \neq \alpha$).

Assume the confounder U in Figure 13(a) is unobserved, but we can measure a proxy Z of U. Can we assess the amount of bias introduced by adjusting for Z instead of U? The answer, again, can be extracted from our path-tracing formula which, labeling X → Y by $\alpha$, U → X by a, U → Y by b, and U → Z by $\gamma$, yields

\[ \beta_{YX\cdot Z} = \frac{\sigma_{YX} - \sigma_{YZ}\,\sigma_{XZ}}{1 - \sigma_{XZ}^2} = \alpha + \frac{ab(1 - \gamma^2)}{1 - a^2\gamma^2} \tag{27} \]

As expected, the bias vanishes when $\gamma$ approaches unity, indicating a faithful proxy. Moreover, if $\gamma$ can be estimated from an external pilot study, the causal effect $\alpha$ can be identified [see 30, 31]. Remarkably, identical behavior emerges in the model of Figure 13(b), in which Z is a driver of U rather than a proxy.

The same treatment can be applied to errors in the measurement of X or of Y and, in each case, the formula for $\beta_{YX\cdot Z}$ reveals which model parameters affect the resulting bias.
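A simulation of Figure 13(a) (an added illustration; values arbitrary) shows how the proxy removes most, but not all, of the confounding bias, in agreement with eq. [27]:

```python
# Measurement error (Figure 13(a)): adjusting for a proxy Z of the confounder U
# shrinks the bias from ab to ab(1 - g^2)/(1 - a^2 g^2), eq. [27].
import numpy as np

rng = np.random.default_rng(12)
n = 500_000
a, b, g, t = 0.5, 0.5, 0.9, 0.3      # U->X, U->Y, U->Z, X->Y

u = rng.normal(size=n)
z = g*u + rng.normal(scale=np.sqrt(1 - g**2), size=n)
x = a*u + rng.normal(scale=np.sqrt(1 - a**2), size=n)
v_uy = 1 - (t**2 + b**2 + 2*t*b*a)
y = t*x + b*u + rng.normal(scale=np.sqrt(v_uy), size=n)

byx = np.cov(y, x)[0, 1] / x.var()
byx_z = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)[0][0]
print(byx - t, a*b)                                # unadjusted bias: 0.25
print(byx_z - t, a*b*(1 - g**2)/(1 - a**2*g**2))   # eq. [27]: ~ 0.06
```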

4 Conclusions

We have demonstrated how path-analytic techniques can illuminate the emergence of several phenomena in causal analysis and how these phenomena depend on the structural features of the model. Although the techniques are limited to linear analysis, hence restricted to homogeneous populations with no interactions, they can be superior to simulation studies whenever conceptual understanding is of the essence and problem size is manageable.

Acknowledgment

This research was supported in part by grants from NSF #IIS-1249822 and ONR #N00014-13-1-0153.

Appendix

In linear systems, the explanation for the equality $\beta_{YX\cdot W} = \alpha\beta$ in Figure 3 is simple. Conditioning on W does not physically constrain Z; it merely limits the variance of Z in the subpopulation satisfying $W = w$, which was chosen for observation. Given that effect homogeneity prevails in linear models, we know that the effect of X on Z remains invariant to the level w chosen for observation, and therefore this w-specific effect reflects the effect of X on the entire population. This dictates (in a confounding-free model) $\beta_{YX\cdot W} = \beta_{YX} = \alpha\beta$.

But how can we explain the persistence of this phenomenon in nonparametric models, where we know (e.g., using do-calculus [7]) that adjustment for W does not have any effect on the resulting estimand? In other words, the equality

\[ P(y \mid do(x)) = \sum_w P(y \mid x, w)\,P(w) \]

will hold in the model of Figure 3 even when the structural equations are nonlinear. Indeed, the independence of W and X implies

\[ \sum_w P(y \mid x, w)\,P(w) = \sum_w P(y \mid x, w)\,P(w \mid x) = P(y \mid x) \]

The answer is that adjustment for W involves averaging over W; conditioning on W does not. In other words, whereas the effect of X on Y may vary across strata of W, the average of this effect is none other than the effect on the entire population, that is, $P(y \mid do(x))$, which equals $P(y \mid x)$ in the non-confounding case.

Symbolically, we have

\begin{align*} P(y \mid do(x)) &= \sum_w P(y \mid do(x), w)\,P(w \mid do(x)) \\ &= \sum_w P(y \mid do(x), w)\,P(w) \\ &= \sum_w P(y \mid x, w)\,P(w) \end{align*}

The first reduction is licensed by the fact that X has no effect on W, and the second by the back-door condition.
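The nonparametric claim can be checked by simulation (an added sketch with an arbitrary nonlinear instance of Figure 3): the interventional mean $E(Y \mid do(x))$ matches the conditional mean $E(Y \mid x)$, even though individual W-strata differ from both:

```python
# Nonlinear version of Figure 3: X -> Z -> Y with W -> Z.
# E[Y | do(x)] = E[Y | x], yet E[Y | x, w] varies with w.
import numpy as np

rng = np.random.default_rng(13)
n = 1_000_000

def simulate(x):
    w = rng.normal(size=n)
    z = np.tanh(x + w) + 0.1*rng.normal(size=n)   # nonlinear mediator
    y = z**2 + 0.1*rng.normal(size=n)             # nonlinear outcome
    return w, y

_, y_do = simulate(np.ones(n))                    # intervention do(X = 1)

x = rng.normal(size=n)                            # observational regime
w, y = simulate(x)
near1 = np.abs(x - 1) < 0.1
print(y_do.mean(), y[near1].mean())               # agree: no confounding
stratum = near1 & (np.abs(w - 1) < 0.1)
print(y[stratum].mean())                          # a single W-stratum differs
```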

References

1. Wright S. Correlation and causation. J Agric Res 1921;20:557–85.

2. Duncan O. Introduction to structural equation models. New York: Academic Press, 1975.

3. Kenny D. Correlation and causality. New York: Wiley, 1979.

4. Heise D. Causal analysis. New York: John Wiley and Sons, 1975.

5. Crámer H. Mathematical methods of statistics. Princeton, NJ: Princeton University Press, 1946.

6. Pearl J. Causal diagrams for empirical research. Biometrika 1995;82:669–710. DOI: 10.1093/biomet/82.4.669.

7. Pearl J. Causality: models, reasoning, and inference, 2nd ed. New York: Cambridge University Press, 2009. DOI: 10.1017/CBO9780511803161.

8. Cox D. The planning of experiments. New York: John Wiley and Sons, 1958.

9. Weinberg C. Toward a clearer definition of confounding. Am J Epidemiol 1993;137:1–8. DOI: 10.1093/oxfordjournals.aje.a116591.

10. Heckman JJ. Sample selection bias as a specification error. Econometrica 1979;47:153–61. DOI: 10.2307/1912352.

11. Robins J. Data, design, and background knowledge in etiologic inference. Epidemiology 2001;12:313–20. DOI: 10.1097/00001648-200105000-00011.

12. Bareinboim E, Pearl J. Controlling selection bias in causal inference. In: Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS). La Palma, Canary Islands, 2012:100–8.

13. Daniel RM, Kenward MG, Cousens SN, Stavola BLD. Using causal diagrams to guide analysis in missing data problems. Stat Methods Med Res 2011;21:243–56. DOI: 10.1177/0962280210394469.

14. Geneletti S, Richardson S, Best N. Adjusting for selection bias in retrospective, case-control studies. Biostatistics 2009;10:17–31. DOI: 10.1093/biostatistics/kxn010.

15. Pearl J. A solution to a class of selection-bias problems. Technical Report R-405, Department of Computer Science, University of California, Los Angeles, CA, 2012. http://ftp.cs.ucla.edu/pub/stat_ser/r405.pdf.

16. Berkson J. Limitations of the application of fourfold table analysis to hospital data. Biometrics Bull 1946;2:47–53. DOI: 10.2307/3002000.

17. Kim J, Pearl J. A computational model for combined causal and diagnostic reasoning in inference systems. In: Proceedings of the Eighth International Joint Conference on Artificial Intelligence (IJCAI-83). Karlsruhe, Germany, 1983.

18. Little RJ, Rubin DB. Statistical analysis with missing data. New York: Wiley, 1987.

19. Pearl J. Myth, confusion, and science in causal analysis. Technical Report R-348, University of California, Los Angeles, CA, 2009. http://ftp.cs.ucla.edu/pub/stat_ser/r348.pdf.

20. Rubin D. Author’s reply: should observational studies be designed to allow lack of balance in covariate distributions across treatment groups? Stat Med 2009;28:1420–23. DOI: 10.1002/sim.3565.

21. Mohan K, Pearl J, Tian J. Missing data as a causal inference problem. Technical Report R-410, Computer Science Department, University of California, Los Angeles, CA, 2013. http://ftp.cs.ucla.edu/pub/stat_ser/r410.pdf.

22. Rosenbaum P. Observational studies, 2nd ed. New York: Springer-Verlag, 2002. DOI: 10.1007/978-1-4757-3692-2.

23. Goldberger A. Reverse regression and salary discrimination. J Hum Resour 1984;19:293–318. DOI: 10.2307/145875.

24. Hirano K, Imbens G. Estimation of causal effects using propensity score weighting: an application to data on right heart catheterization. Health Serv Outcomes Res Methodol 2001;2:259–78. DOI: 10.1023/A:1020371312283.

25. Bhattacharya J, Vogt W. Do instrumental variables belong in propensity scores? NBER Technical Working Paper 343, National Bureau of Economic Research, MA, 2007. DOI: 10.3386/t0343.

26. Pearl J. On a class of bias-amplifying variables that endanger effect estimates. In: Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence. Corvallis, OR: AUAI, 2010:417–24. http://ftp.cs.ucla.edu/pub/stat_ser/r356.pdf.

27. Wooldridge J. Should instrumental variables be used as matching variables? Technical Report, Michigan State University, MI, 2009. https://www.msu.edu/ec/faculty/wooldridge/current%20research/treat1r6.pdf.

28. Myers JA, Rassen JA, Gagne JJ, Huybrechts KF, Schneeweiss S, Rothman KJ, Joffe MM, Glynn RJ. Effects of adjusting for instrumental variables on bias and precision of effect estimates. Am J Epidemiol 2011;174:1213–22. DOI: 10.1093/aje/kwr364.

29. Pearl J. Invited commentary: understanding bias amplification. Am J Epidemiol 2011 [online]. DOI: 10.1093/aje/kwr352.

30. Pearl J. On measurement bias in causal inferences. In: Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence. Corvallis, OR: AUAI, 2010:425–32. http://ftp.cs.ucla.edu/pub/stat_ser/r357.pdf.

31. Kuroki M, Pearl J. Measurement bias and effect restoration in causal inference. Technical Report R-366, Computer Science Department, University of California, Los Angeles, CA, 2013. http://ftp.cs.ucla.edu/pub/stat_ser/r366.pdf.

1. Readers familiar with do-calculus [6] can interpret $\alpha$ as the experimental slope $\frac{\partial}{\partial x}\,E(Y \mid do(x), do(z))$, while those familiar with counterfactual logic can write $Y_{x+1,z}(u) - Y_{x,z}(u) = \alpha$. The latter implies the former, and the two coincide in linear models, where causal effects are homogeneous (i.e., unit-independent).

2. Standardized parameters refer to systems in which (without loss of generality) all variables are normalized to have zero mean and unit variance, which significantly simplifies the algebra.

3. It has come to my attention recently, and I feel a responsibility to make it public, that seasoned reviewers for highly reputable journals reject papers because they are not convinced that such bias can be created; it defies, so they claim, everything they have learned from statistics and economics. A typical resistance to accepting Berkson’s paradox is articulated in [19, 20].

4. $\mathrm{var}(X \mid R_X = 0)$ stands for the conditional variance of X given $R_X = 0$. We take the liberty of treating $R_X$ as any other variable in the linear system, even though it is binary, hence its relationship to its parents must be nonlinear. The linear context simplifies the intuition, and the results hold in nonparametric systems as well.

Published Online: 2013-5-29

©2013 by Walter de Gruyter Berlin / Boston
