
# Recursive partitioning for heterogeneous causal effects

Edited by Richard M. Shiffrin, Indiana University, Bloomington, IN, and approved May 20, 2016 (received for review June 25, 2015)

## Abstract

In this paper we propose methods for estimating heterogeneity in causal effects in experimental and observational studies and for conducting hypothesis tests about the magnitude of differences in treatment effects across subsets of the population. We provide a data-driven approach to partition the data into subpopulations that differ in the magnitude of their treatment effects. The approach enables the construction of valid confidence intervals for treatment effects, even with many covariates relative to the sample size, and without “sparsity” assumptions. We propose an “honest” approach to estimation, whereby one sample is used to construct the partition and another to estimate treatment effects for each subpopulation. Our approach builds on regression tree methods, modified to optimize for goodness of fit in treatment effects and to account for honest estimation. Our model selection criterion anticipates that bias will be eliminated by honest estimation and also accounts for the effect of making additional splits on the variance of treatment effect estimates within each subpopulation. We address the challenge that the “ground truth” for a causal effect is not observed for any individual unit, so that standard approaches to cross-validation must be modified. Through a simulation study, we show that for our preferred method honest estimation results in nominal coverage for 90% confidence intervals, whereas coverage ranges between 74% and 84% for nonhonest approaches. Honest estimation requires estimating the model with a smaller sample size; the cost in terms of mean squared error of treatment effects for our preferred method ranges from 7% to 22%.

- heterogeneous treatment effects
- causal inference
- cross-validation
- supervised machine learning
- potential outcomes

In this paper we study two closely related problems: first, estimating heterogeneity by covariates or features in causal effects in experimental or observational studies, and second, conducting inference about the magnitude of the differences in treatment effects across subsets of the population. Causal effects, in the Rubin causal model or potential outcome framework we use here (1–3), are comparisons between outcomes we observe and counterfactual outcomes we would have observed under a different regime or treatment. We introduce data-driven methods that select subpopulations to estimate treatment effect heterogeneity and to test hypotheses about the differences between the effects in different subpopulations. For experiments, our method allows researchers to identify heterogeneity in treatment effects that was not specified in a preanalysis plan, without concern about invalidating inference due to searching over many possible partitions.

Our approach is tailored for applications where there may be many attributes of a unit relative to the number of units observed, and where the functional form of the relationship between treatment effects and the attributes of units is not known. The supervised machine learning literature (e.g., ref. 4) has developed a variety of effective methods for a closely related problem, the problem of predicting outcomes as a function of covariates in similar environments. The most popular approaches [e.g., regression trees (5), random forests (6), LASSO (7), support vector machines (8), etc.] entail building a model of the relationship between attributes and outcomes, with a penalty parameter that penalizes model complexity. Cross-validation is often used to select the optimal level of complexity (the one that maximizes predictive power without “overfitting”).

Within the prediction-based machine learning literature, regression trees differ from most other methods in that they produce a partition of the population according to covariates, whereby all units in a partition receive the same prediction. In this paper, we focus on the analogous goal of deriving a partition of the population according to treatment effect heterogeneity, building on standard regression trees (5, 6). Whether the ultimate goal in an application is to derive a partition or fully personalized treatment effect estimates depends on the setting; settings where partitions may be desirable include those where decision rules must be remembered, applied, or interpreted by human beings or computers with limited processing power or memory. Examples include treatment guidelines to be used by physicians or even online personalization applications where having a simple lookup table reduces latency for the user. We show that an attractive feature of focusing on partitions is that we can achieve nominal coverage of confidence intervals for estimated treatment effects even in settings with a modest number of observations and many covariates. Our approach has applicability even for settings such as clinical trials of drugs with only a few hundred patients, where the number of patient characteristics is potentially quite large. Our method may also be viewed as a complement to the use of “preanalysis plans” where the researcher must commit in advance to the subgroups that will be considered. It enables researchers to let the data discover relevant subgroups while preserving the validity of confidence intervals constructed on treatment effects within subgroups.

A first challenge for our goal of finding a partition and then testing hypotheses about treatment effects is that many existing machine learning methods cannot be used directly for constructing confidence intervals. This is because the methods are “adaptive”: They use the training data for model selection, so that spurious correlations between covariates and outcomes affect the selected model, leading to biases that disappear only slowly as the sample size grows. In some contexts, additional assumptions such as “sparsity” (only a few covariates affect the outcomes) can be applied to guarantee consistency or asymptotic (large sample) normality of predictions (9). In this paper, we use an alternative approach that places no restrictions on model complexity, which we refer to as “honesty.” We say that a model is “honest” if it does not use the same information for selecting the model structure (in our case, the partition of the covariate space) as for estimation given a model structure. We accomplish this by splitting the training sample into two parts, one for constructing the tree (including the cross-validation step) and a second for estimating treatment effects within leaves of the tree. Honesty has the implication that the asymptotic properties of treatment effect estimates within the partitions are the same as if the partition had been exogenously given. Although there is a loss of precision due to sample splitting (which reduces sample size in each step of estimation), there is a benefit in terms of eliminating bias that offsets at least part of the cost.
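The sample-splitting step can be illustrated with a short sketch (illustrative names and toy data, not the authors' code): the partition is built from one half of the data, and leaf effects are later estimated only from the other, disjoint half.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Toy randomized experiment (illustrative data, not from the paper):
# outcome y, binary treatment w, covariates x.
x = rng.normal(size=(n, 5))
w = rng.integers(0, 2, size=n)
y = x[:, 0] + w * (1.0 + (x[:, 1] > 0)) + rng.normal(size=n)

# Honest estimation: disjoint subsamples for building the partition
# and for estimating treatment effects within its leaves.
idx = rng.permutation(n)
train_idx, est_idx = idx[: n // 2], idx[n // 2:]

# The tree would be grown on the train half only; leaf treatment
# effects and their confidence intervals use the est half only.
```

Because the estimation sample plays no role in selecting the partition, the within-leaf estimates behave as if the partition had been fixed in advance.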

A key contribution of this paper is to show that criteria for both constructing the partition and cross-validation change when we anticipate honest estimation. In the first stage of estimation, the criterion is the expectation of the mean squared error (MSE) when treatment effects are reestimated in the second stage. Crucially, we anticipate that second-stage estimates of treatment effects will be unbiased in each leaf, because they will be performed on an independent sample. In that case, splitting and cross-validation criteria are adjusted to ignore systematic bias in estimation and focus instead on the tradeoff between more tailored prediction (smaller leaf size) and the variance that will arise in the second (honest estimation) stage due to noisy estimation within small leaves.

A second and perhaps more fundamental challenge to applying machine learning methods such as regression trees (5) off-the-shelf to the problem of causal inference is that regularization approaches based on cross-validation typically rely on observing the “ground truth,” that is, actual outcomes in a cross-validation sample. However, if our goal is to minimize the MSE of treatment effects, we encounter what Holland (2) calls the “fundamental problem of causal inference”: The causal effect is not observed for any individual unit, and so we do not directly have a ground truth. We address this by proposing approaches for constructing unbiased estimates of the MSE of the causal effect of the treatment.

Using theoretical arguments and a simulation exercise, we compare our approach with previously proposed ones. Relative to approaches that focus on goodness of fit in model selection, our approach yields substantial improvements in the MSE of treatment effects (ranging from 43% to 210%). We also examine the costs and benefits of honest estimation relative to adaptive estimation. In the settings we consider, honest estimation leads to approximately nominal coverage of confidence intervals across estimation methods and settings, whereas for adaptive estimation approaches coverage can be as low as 69%. The cost of honest estimation in terms of MSE of treatment effects (where for adaptive estimation, we have a larger sample size available for training) ranges from 7% to 22% for our preferred model.

## The Problem

### Setup.

We consider a setup where there are *N* units, indexed by *i* = 1, …, *N*. Let W_i ∈ {0, 1} be the binary treatment indicator, with W_i = 0 if unit *i* received the control treatment and W_i = 1 if unit *i* received the active treatment, and let Y_i(0) and Y_i(1) denote the corresponding pair of potential outcomes. The realized outcome for unit *i* is the potential outcome corresponding to the treatment received, Y_i = Y_i(W_i). Let X_i be a *K*-component vector of features, covariates, or pretreatment variables, known not to be affected by the treatment. Our data consist of the triple (Y_i, W_i, X_i) for each unit, regarded as a random sample from a large population; expectations and probabilities refer to the distribution induced by this sampling and by the (conditional) random assignment of the treatment. Write *p* = pr(W_i = 1) for the marginal treatment probability and *e*(*x*) = pr(W_i = 1 | X_i = *x*) for the conditional treatment probability (the “propensity score”).
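A toy simulation (hypothetical data, for illustration only) makes the potential-outcome bookkeeping concrete: each unit carries two potential outcomes, but only the one corresponding to the assigned treatment is realized.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

# Potential outcomes: y0 = Y_i(0), y1 = Y_i(1). Only one of the two
# is ever observed for a given unit (toy data, constant effect of 2).
y0 = rng.normal(size=n)
y1 = y0 + 2.0
w = rng.integers(0, 2, size=n)

# Realized outcome: the potential outcome for the treatment received.
y_obs = np.where(w == 1, y1, y0)
```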

### Unconfoundedness.

Throughout the paper, we maintain the assumption of randomization conditional on the covariates, or “unconfoundedness” (11), formalized as given below.

### Assumption 1 (Unconfoundedness).

W_i ⊥ (Y_i(0), Y_i(1)) | X_i, *using the symbol* ⊥ *to denote (conditional) independence of two random variables*. *This assumption is satisfied in a randomized experiment without conditioning on covariates but also may be justified in observational studies if the researcher is able to observe all of the variables that affect the unit’s receipt of treatment and are associated with the potential outcomes*.

To simplify exposition, in the main body of the paper we maintain the stronger assumption of complete randomization, whereby W_i is independent of (Y_i(0), Y_i(1), X_i), so that the propensity score is constant and equal to the marginal treatment probability *p*.

### Conditional Average Treatment Effects and Partitioning.

Define the conditional average treatment effect (CATE) τ(*x*) ≡ E[Y_i(1) − Y_i(0) | X_i = *x*].

A large part of the causal inference literature (e.g., refs. 3 and 12–14) is focused on estimating the population (marginal) average treatment effect E[Y_i(1) − Y_i(0)]. The main focus of the current paper is instead on obtaining accurate estimates of, and valid inferences for, the conditional average treatment effect τ(*x*) and for its averages within elements of a partition of the feature space.

## Honest Inference for Population Averages

Our approach departs from conventional classification and regression trees (CART) in two fundamental ways. First, we focus on estimating conditional average treatment effects rather than predicting outcomes. Conventional regression tree methods are therefore not directly applicable because we do not observe unit-level causal effects for any unit. Second, we impose a separation between constructing the partition and estimating effects within leaves of the partition, using separate samples for the two tasks, in what we refer to as honest estimation. We contrast honest estimation with adaptive estimation used in conventional CART, where the same data are used to build the partition and estimate leaf effects. In this section we introduce the changes induced by honest estimation in the context of the conventional prediction setting; in the next section we consider causal effects. In the discussion in this section we observe for each unit *i* a pair of variables (Y_i, X_i), and the target is the conditional mean function μ(*x*) ≡ E[Y_i | X_i = *x*].

### Setup.

We begin by defining key concepts and functions. First, a tree or partitioning Π corresponds to a partition of the feature space, with #(Π) denoting the number of elements in the partition; write ℓ(*x*; Π) for the leaf ℓ ∈ Π such that *x* ∈ ℓ.

Given a partition Π, define the conditional mean function μ(*x*; Π) ≡ E[Y_i | X_i ∈ ℓ(*x*; Π)], and let μ̂(*x*; S, Π) denote the natural estimator: the sample mean of Y_i within the leaf ℓ(*x*; Π), computed on a sample S.

### The Honest Target.

A central concern in this paper is the criterion used to compare alternative estimators; following much of the literature, we focus on MSE criteria, but we will modify these criteria in a variety of ways.

For the prediction case, we adjust the MSE by subtracting Y_i², a term that does not depend on the estimator, so that for a test sample S^te, an estimation sample S^est, and a partition Π the (adjusted) criterion is MSE_μ(S^te, S^est, Π) ≡ (1/#(S^te)) Σ_{i∈S^te} {(Y_i − μ̂(X_i; S^est, Π))² − Y_i²}; the expected MSE, EMSE_μ(Π), takes the expectation over the test and estimation samples.

Our ultimate goal is to construct and assess algorithms π(·) that map a training sample into a partition, judged by the honest criterion Q^H(π) ≡ −E[MSE_μ(S^te, S^est, π(S^tr))], where the expectation is over independent test, estimation, and training samples.

### The Adaptive Target.

In the conventional CART approach the target is slightly different: Q^C(π) ≡ −E[MSE_μ(S^te, S^tr, π(S^tr))], so the same training sample is used both to construct the partition and to estimate the leaf means.

We refer to the conventional CART approach as adaptive and to our approach as honest. In practice there will be costs and benefits of the honest approach relative to the adaptive approach. The cost is sample size: given a dataset, putting some data in the estimation sample leaves fewer units for the training sample, leading to higher expected MSE. The advantage of honest estimation is that it avoids a problem of adaptive estimation: spurious extreme values of Y_i tend to be grouped together by the partitioning algorithm, so that leaf means estimated on the training sample are systematically more extreme than they would be in an independent sample, biasing both estimates and confidence intervals.

### The Implementation of CART.

There are two distinct parts of the conventional CART algorithm, initial tree building and cross-validation to select a complexity parameter used for pruning. Each part of the algorithm relies on a criterion function based on MSE. In this paper we will take as given the overall structure of the CART algorithm (e.g., refs. 4 and 5), and our focus will be on modifying the criteria.

In the tree-building phase, CART recursively partitions the observations of the training sample. For each leaf, the algorithm evaluates all candidate splits of that leaf (each of which induces an alternative, finer partition) and selects the split that minimizes the in-sample MSE of the resulting leaf means.

It is well understood that the conventional criterion leads to overfitting, a problem that is solved by cross-validation to select a penalty on tree depth. The in-sample goodness-of-fit criterion will always improve with additional splits, even though additional refinements of a partition may in fact increase the expected MSE in fresh samples.

### Honest Splitting.

In our honest estimation algorithm, we modify CART in two ways. First, we use an independent sample S^est, rather than the training sample S^tr, to estimate leaf means. Second, we modify the splitting and cross-validation criteria to anticipate this honest estimation.

To begin developing our criteria, let us expand the negative of the expected MSE into two components: the expected squared conditional leaf mean, E[μ²(X_i; Π)], minus the expected variance of the leaf-mean estimator that will be computed on the estimation sample.

We wish to estimate this quantity using only the training sample, given that the size N^est of the estimation sample is known.

To estimate the average of the squared outcome means we can use the squared sample means within leaves, corrected for their sampling variance; combining the pieces yields the unbiased estimator

−ÊMSE_μ(S^tr, N^est, Π) = (1/N^tr) Σ_{i∈S^tr} μ̂²(X_i; S^tr, Π) − (1/N^tr + 1/N^est) Σ_{ℓ∈Π} S²_{S^tr}(ℓ),

where S²_{S^tr}(ℓ) is the within-leaf sample variance of Y_i in the training sample.

Comparing this to the criterion used in the conventional CART algorithm, which can be written as the average of μ̂²(X_i; S^tr, Π) alone, the honest criterion differs by the variance penalty: it rewards heterogeneity in leaf means but penalizes partitions whose leaf means will be estimated imprecisely on the estimation sample.
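As an illustration, a variance-penalized criterion of this form can be computed for a fixed candidate partition as follows. This is a sketch, not the authors' implementation; the function name and the leaf encoding are ours, and leaves are assumed to contain at least two observations.

```python
import numpy as np

def honest_criterion(y, leaves, n_est):
    """Variance-penalized goodness-of-fit criterion for a candidate
    partition, anticipating honest re-estimation of leaf means on a
    fresh sample of n_est units. `leaves` assigns each training unit
    to a leaf id; each leaf needs at least two observations."""
    n_tr = len(y)
    fit, penalty = 0.0, 0.0
    for leaf in np.unique(leaves):
        y_leaf = y[leaves == leaf]
        # Reward: squared leaf mean, weighted by leaf size.
        fit += len(y_leaf) * np.mean(y_leaf) ** 2
        # Penalty: within-leaf sample variance of the outcome.
        penalty += np.var(y_leaf, ddof=1)
    return fit / n_tr - (1.0 / n_tr + 1.0 / n_est) * penalty
```

A split is attractive under this criterion only if the gain in leaf-mean heterogeneity outweighs the added variance from estimating more leaf means on the estimation sample.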

### Honest Cross-Validation.

Even though our splitting criterion anticipates honest estimation and penalizes within-leaf variance, it is evaluated on the same training sample used to select splits, so deeper trees can still overfit; we therefore retain a cross-validation step to choose the depth penalty.

Because the conventional CART cross-validation criterion does not account for honest estimation, we consider the analog of our unbiased estimate of the criterion, which accounts for honest estimation by evaluating a partition with leaf means estimated on the held-out cross-validation sample and with the variance penalty reflecting the size of the eventual estimation sample.

This estimator for the honest criterion is unbiased for a fixed partition, although it is noisy because cross-validation samples are small.

## Honest Inference for Treatment Effects

In this section we change the focus to estimating conditional average treatment effects instead of estimating conditional population means. We refer to the estimators developed in this section as “causal tree” (CT) estimators.

The setting with treatment effects creates some specific problems because we do not observe the value of the treatment effect whose conditional mean we wish to estimate. This complicates the calculation of the criteria we introduced in the previous section. However, a key point of this paper is that we can estimate these criteria and use those estimates for splitting and cross-validation.

We now observe in each sample the triple (Y_i, W_i, X_i). Define, for a partition Π and both treatment levels *w* ∈ {0, 1}, the population average outcome μ(*w*, *x*; Π) ≡ E[Y_i(*w*) | X_i ∈ ℓ(*x*; Π)] and the average causal effect τ(*x*; Π) ≡ μ(1, *x*; Π) − μ(0, *x*; Π); the natural estimator τ̂(*x*; S, Π) is the difference between the sample means of treated and control units within the leaf.

A key challenge is that, in contrast to the prediction setting, we do not observe the unit-level causal effect Y_i(1) − Y_i(0) for any unit, so there is no direct analog of the observed outcome with which to compute the MSE of treatment effects.

### Modifying Conventional CART for Treatment Effects.

Consider first modifying conventional (adaptive) CART to estimate heterogeneous treatment effects. Note that in the prediction case, using the fact that the sample mean minimizes the within-leaf MSE, minimizing the in-sample MSE is equivalent to maximizing the average of the squared leaf-mean predictions. By the same logic, for treatment effects we can define an (infeasible) MSE relative to the true τ(*x*) and obtain a feasible splitting criterion that maximizes the average of the squared estimated leaf treatment effects, (1/N^tr) Σ_{i∈S^tr} τ̂²(X_i; S^tr, Π).

### Modifying the Honest Approach.

The honest approach described in the previous section for prediction problems also needs to be modified for the treatment effect setting. Using the same expansion as before, now applied to the treatment effect setting, we find the unbiased estimator

−ÊMSE_τ(S^tr, N^est, Π) = (1/N^tr) Σ_{i∈S^tr} τ̂²(X_i; S^tr, Π) − (1/N^tr + 1/N^est) Σ_{ℓ∈Π} ( S²_treat(ℓ)/*p* + S²_control(ℓ)/(1 − *p*) ),

where S²_treat(ℓ) and S²_control(ℓ) are the within-leaf sample variances of the outcome for treated and control units in the training sample.

These expressions are directly analogous to the criteria we proposed for the honest version of CART in the prediction case. The criteria reward a partition for finding strong heterogeneity in treatment effects and penalize a partition that creates variance in leaf estimates. One difference is that in the prediction case the two terms both tend to select features that predict heterogeneity in outcomes, whereas for the treatment effect case the two terms reward different types of features. It is possible to reduce the variance of a treatment effect estimator by introducing a split, even if both child leaves have the same average treatment effect, if a covariate affects the mean outcome but not treatment effects. In such a case, the split results in more homogeneous leaves, and thus lower-variance estimates of the means of the treatment group and control group outcomes. Thus, the distinction between the adaptive and honest splitting criteria will be more pronounced for treatment effect estimation. As in the prediction case, the cross-validation criterion estimates treatment effects within leaves using the held-out cross-validation sample, with the variance penalty reflecting the size of the estimation sample.
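The treatment-effect analog of the criterion can be sketched in the same way (again illustrative and not the authors' code, assuming complete randomization with treatment probability `p` and at least two treated and two control units per leaf):

```python
import numpy as np

def honest_ct_criterion(y, w, leaves, n_est, p=0.5):
    """Splitting criterion for causal trees under honest estimation:
    reward squared leaf treatment effects, penalize the variance of
    re-estimating them on a fresh sample of n_est units. Each leaf
    must contain >= 2 treated and >= 2 control observations."""
    n_tr = len(y)
    fit, penalty = 0.0, 0.0
    for leaf in np.unique(leaves):
        in_leaf = leaves == leaf
        y1 = y[in_leaf & (w == 1)]
        y0 = y[in_leaf & (w == 0)]
        tau_hat = y1.mean() - y0.mean()    # within-leaf difference in means
        fit += in_leaf.sum() * tau_hat ** 2
        # Variance penalty: outcome variances in each arm, scaled by
        # the treatment probability.
        penalty += np.var(y1, ddof=1) / p + np.var(y0, ddof=1) / (1 - p)
    return fit / n_tr - (1.0 / n_tr + 1.0 / n_est) * penalty
```

Note how a covariate that reduces within-arm outcome variance can improve this criterion even without changing leaf treatment effects, as discussed above.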

## Four Partitioning Estimators for Causal Effects

In this section we briefly summarize our CT estimator and then describe three alternative types of estimators. We compare CT to the alternatives theoretically and through simulations. For each of the four types there is an adaptive version and an honest version, where the latter takes into account that estimation will be done on a sample separate from the sample used for constructing the partition, leading to a total of eight estimators. Note that further variations are possible; one could use adaptive splitting and cross-validation methods to construct a tree but still perform honest estimation on a separate sample. We do not consider such variations.

### CTs.

The discussion above developed our preferred estimator, CTs. To summarize, for the adaptive version of CTs, denoted CT-A, we use for splitting the objective of maximizing the average of the squared estimated leaf treatment effects; for the honest version, CT-H, we use the variance-penalized criterion for both splitting and cross-validation, and we estimate leaf treatment effects on the separate estimation sample.

### Transformed Outcome Trees.

Our first alternative method is based on the insight that by using a transformed version of the outcome, Y_i* = Y_i · (W_i − *p*)/(*p*(1 − *p*)), whose conditional expectation equals the treatment effect τ(*x*), off-the-shelf prediction methods can be applied directly; the approach requires knowledge of the treatment probability *p*. Because this method is primarily considered as a benchmark, in simulations we focus only on an adaptive version that can use existing learning methods entirely off-the-shelf. The adaptive version of the transformed outcome tree (TOT) estimator we consider, TOT-A, uses the conventional CART algorithm with the transformed outcome replacing the original outcome. The honest version, TOT-H, uses the same splitting and cross-validation criteria, so that it builds the same trees; it differs only in that a separate estimation sample is used to construct the leaf estimates. The treatment effect estimator within a leaf is the same as the adaptive method, that is, the sample mean of the transformed outcome within the leaf.
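A minimal sketch of the outcome transformation (with the treatment probability `p` assumed known, as in the text; the function name is ours):

```python
import numpy as np

def transformed_outcome(y, w, p):
    """Transformed outcome Y* whose conditional mean equals the
    conditional average treatment effect when treatment is randomly
    assigned with known probability p."""
    return y * (w - p) / (p * (1.0 - p))
```

With p = 1/2 this doubles the outcome for treated units and negates and doubles it for control units, so that leaf means of Y* estimate leaf treatment effects; the price is extra variance relative to a direct difference in means.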

### Fit-Based Trees.

We consider two additional alternative methods for constructing trees, based on suggestions in the literature. In the first of these alternatives the choice of which feature to split on, and at what value of the feature to split, is based on comparisons of the goodness of fit (F) of the outcome rather than the treatment effect. In standard CART, of course, goodness of fit of outcomes is also the split criterion, but here we estimate a model for treatment effects within each leaf. Specifically, we have a linear model with an intercept and an indicator for the treatment as the regressors, rather than only an intercept as in standard CART. This approach is used in Zeileis et al. (19), who consider building general models at the leaves of the trees. Treatment effect estimation is a special case of their framework. Zeileis et al. (19) propose using statistical tests based on improvements in goodness of fit to determine when to stop growing the tree, rather than relying on cross-validation, but for ease of comparison with CART, in this paper we will stay closer to traditional CART in terms of growing deep trees and pruning them. We modify the MSE function so that fit is measured by the squared residuals from the within-leaf regression of the outcome on an intercept and the treatment indicator.

### Squared T-Statistic Trees.

For the last estimator we look for splits with the largest value for the square of the t-statistic (TS) for testing the null hypothesis that the average treatment effect is the same in the two potential leaves. This estimator was proposed by Su et al. (20). If the two leaves are denoted *L* (Left) and *R* (Right), the square of the t-statistic is T² = (τ̂_L − τ̂_R)²/V̂, where V̂ is the estimated variance of the difference τ̂_L − τ̂_R.
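One form of this statistic can be sketched as follows, using unpooled within-leaf variance estimates (an illustrative choice; implementations differ in how the variance is pooled):

```python
import numpy as np

def leaf_effect_and_var(y, w):
    """Difference-in-means treatment effect in a leaf and the
    estimated variance of that estimate (>= 2 units per arm)."""
    y1, y0 = y[w == 1], y[w == 0]
    tau = y1.mean() - y0.mean()
    var = np.var(y1, ddof=1) / len(y1) + np.var(y0, ddof=1) / len(y0)
    return tau, var

def squared_t_stat(y_left, w_left, y_right, w_right):
    """Square of the t-statistic for equality of the average
    treatment effects in the two candidate child leaves."""
    tau_l, var_l = leaf_effect_and_var(y_left, w_left)
    tau_r, var_r = leaf_effect_and_var(y_right, w_right)
    return (tau_l - tau_r) ** 2 / (var_l + var_r)
```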

### Comparison of the CTs, the F Criterion, and the TS Criterion.

It is useful to compare our proposed criterion to the F and TS criteria in a simple setting to gain insight into the relative merits of the three approaches. We do so here focusing on a decision of whether to proceed with a single candidate split on a binary covariate. The comparison highlights that the F criterion favors splits on covariates that predict outcome levels even when treatment effects are homogeneous, whereas the TS criterion ignores the variance-reduction benefits of splitting; the CT criterion balances heterogeneity in treatment effects against the variance of the honest estimates.

## Inference

Given the estimated conditional average treatment effect we also would like to do inference. Once constructed, the tree is a function of covariates, and if we use a distinct sample to conduct inference, then the problem reduces to that of estimating treatment effects in each member of a partition of the covariate space. For this problem, standard approaches are therefore valid for the estimates obtained via honest estimation and, in particular, no assumptions about model complexity are required. As our simulations below illustrate, for the adaptive methods standard approaches to confidence intervals are not generally valid for the reasons discussed above.
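For a single leaf, honest inference reduces to a two-sample comparison on the held-out estimation sample. A sketch (illustrative only, using a normal critical value for a 90% interval):

```python
import numpy as np

def leaf_confidence_interval(y_est, w_est, z=1.645):
    """90% confidence interval (normal approximation) for the average
    treatment effect in one leaf, computed only on the estimation
    sample so that standard errors are valid."""
    y1, y0 = y_est[w_est == 1], y_est[w_est == 0]
    tau_hat = y1.mean() - y0.mean()
    se = np.sqrt(np.var(y1, ddof=1) / len(y1) + np.var(y0, ddof=1) / len(y0))
    return tau_hat - z * se, tau_hat + z * se
```

Because the estimation sample was not used to choose the partition, these intervals have the same justification as in a study where the subgroups were prespecified.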

## A Simulation Study

To assess the relative performance of the proposed algorithms we carried out a small simulation study with three distinct designs. In Table 1 we report a number of summary statistics from the simulations. We report averages; results for medians are similar. We report results for each of the three designs.

In all designs the marginal treatment probability is one-half. If *K* denotes the number of features, each design specifies a mean-outcome function *η*(*x*) and a treatment-effect function *κ*(*x*): some covariates enter both treatment effects (*κ*) and mean outcomes (*η*), some covariates enter *η* but not *κ*, and some covariates do not affect outcomes at all (“noise” covariates). Design 1 does not have noise covariates. In designs 2 and 3, the first few covariates enter *κ*, but only when their signs are positive, whereas they affect *η* throughout their range. Different criteria will thus lead to different optimal splits, even within a covariate; F will focus more on splits when the covariates are negative.

The first section of Table 1 compares the number of leaves across the different designs and different training sample sizes.

The second section of Table 1 examines the performance of the alternative honest estimators, as evaluated by the infeasible criterion: the MSE of the estimated treatment effects relative to the true *κ*. F-H would perform better in alternative designs where the same covariates drive both mean outcomes and treatment effects. In design 1, covariates enter *η* and *κ* the same way, so that the CT-H criterion is aligned with TS-H. Designs 2 and 3 are more complex, and the ideal splits from the perspective of balancing overall MSE of treatment effects (including variance reduction) are different from those favored by TS-H. Thus, TS performs worse, and the difference is exacerbated with larger sample size in design 3, where there are more opportunities for the estimators to build deeper trees and thus to make different choices. We also calculate comparisons based on a feasible criterion, the average squared difference between the transformed outcome and the estimated treatment effect, reported in *SI Appendix*. The results are consistent with those from the infeasible criterion, but the feasible criterion compresses the performance differences.

The third section of Table 1 explores the costs and benefits of honest estimation. The table reports the ratio of the infeasible MSE of each honest estimator to that of its adaptive counterpart.

The final two sections of Table 1 show the coverage rate for 90% confidence intervals. We achieve nominal coverage rates for honest methods in all designs, whereas the adaptive methods have coverage rates substantially below nominal rates. The fit estimator has the highest adaptive coverage rates; it does not focus on treatment effects and thus is less prone to overstating treatment effect heterogeneity through adaptive estimation. Thus, our simulations bear out the tradeoff that honest estimation sacrifices some goodness of fit (of treatment effects) in exchange for valid confidence intervals.

## Observational Studies with Unconfoundedness

The discussion so far has focused on the setting where the assignment to treatment is randomized. The proposed methods can be adapted to observational studies under the assumption of unconfoundedness. In that case we need to modify the estimates within leaves to remove the bias from simple comparisons of treated and control units. There is a large literature on methods for doing so (e.g., ref. 3). For example, as in ref. 21 we can do so by propensity score weighting. Efficiency will improve if we renormalize the weights within each leaf and within the treatment and control group when estimating treatment effects. Crump et al. (22) propose approaches to trimming observations with extreme values for the propensity score to improve robustness. Note that there are some additional conditions required to establish asymptotic normality of treatment effect estimates when propensity score weighting is used (see, e.g., ref. 21); these results apply without modification to the estimation phase of honest partitioning algorithms.
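A leaf estimate with inverse-propensity weights renormalized within each arm can be sketched as follows (a Hájek-style weighted average; the function name is illustrative, and the propensity estimates are taken as given):

```python
import numpy as np

def weighted_leaf_effect(y, w, e_hat):
    """Propensity-weighted treatment effect for one leaf, with weights
    renormalized within the treated and control groups of the leaf."""
    treated, control = w == 1, w == 0
    wt_t = 1.0 / e_hat[treated]           # inverse propensity weights
    wt_c = 1.0 / (1.0 - e_hat[control])
    # Renormalization makes each arm's estimate a weighted average of
    # outcomes, which tends to be more efficient than unnormalized
    # weighting. In practice, units with extreme propensity scores may
    # be trimmed first (cf. Crump et al.).
    mu1 = np.sum(wt_t * y[treated]) / np.sum(wt_t)
    mu0 = np.sum(wt_c * y[control]) / np.sum(wt_c)
    return mu1 - mu0
```

When the propensity score is constant, as in a completely randomized experiment, this reduces to the simple difference in leaf means.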

## The Literature

A small but growing literature seeks to apply supervised machine learning techniques to the problem of estimating heterogeneous treatment effects. Beyond those previously discussed, Tian et al. (23) transform the features rather than the outcomes and then apply LASSO to the model with the original outcome and the transformed features. Foster et al. (24) estimate conditional mean outcomes for treated and control units using random forests and then use the difference in the two predictions for each unit as an estimated unit-level treatment effect, which serves as the outcome in a second-stage tree.

## Conclusion

In this paper we introduce methods for constructing trees for causal effects that allow us to do valid inference for the causal effects in randomized experiments and in observational studies satisfying unconfoundedness. These methods provide valid confidence intervals without restrictions on the number of covariates or the complexity of the data-generating process. Our methods partition the feature space into subspaces. The output of our method is a set of treatment effects and confidence intervals for each subspace.

A potentially important application of the techniques is to “data mining” in randomized experiments. Our method can be used to explore any previously conducted randomized controlled trial, for example, medical studies or field experiments in development economics. Our methods can discover subpopulations with lower-than-average or higher-than-average treatment effects while producing confidence intervals for these estimates with nominal coverage, despite having searched over many possible subpopulations.

## Acknowledgments

We are grateful for comments provided at seminars at the National Academy of Sciences Sackler Colloquium, the Southern Economics Association, the Stanford Conference on Causality in the Social Sciences, the MIT Conference in Digital Experimentation, Harvard University, University of Washington, Microsoft Research, Facebook, KDD, the AAAI Embedded Machine Learning Conference, the University of Pennsylvania, the California Econometrics Conference, the Collective Intelligence Conference, the University of Arizona, the Paris DataLead conference, Cornell University, Carnegie Mellon University, University of Bonn, University of California, Berkeley, the DARPA conference on Machine Learning and Causal Inference, and the NYC Data Science Seminar Series. Part of this research was conducted while the authors were visiting Microsoft Research.

## Footnotes

- ^{1}To whom correspondence should be addressed. Email: athey{at}stanford.edu.

Author contributions: S.A. and G.I. designed research, performed research, contributed new reagents/analytic tools, analyzed data, and wrote the paper.

Conflict of interest statement: The authors received funding from Microsoft Research.

This paper results from the Arthur M. Sackler Colloquium of the National Academy of Sciences, “Drawing Causal Inference from Big Data,” held March 26–27, 2015, at the National Academies of Sciences in Washington, DC. The complete program and video recordings of most presentations are available on the NAS website at www.nasonline.org/Big-data.

This article is a PNAS Direct Submission.

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1510489113/-/DCSupplemental.

## References

1. —
2. Holland PW.
3. Imbens G, Rubin D.
4. Hastie T, Tibshirani R, Friedman J.
5. Breiman L, Friedman J, Olshen R, Stone C.
6. Breiman L.
7. Tibshirani R.
8. Vapnik V.
9. Wager S, Athey S.
10. —
11. Rosenbaum P, Rubin D.
12. —
13. Pearl J.
14. Rosenbaum P.
15. Beygelzimer A, Langford J.
16. Dudik M, Langford J, Li L. *Proceedings of the 28th International Conference on Machine Learning* (International Machine Learning Society).
17. Signorovitch J.
18. Weisberg HI, Pontes VP.
19. Zeileis A, et al.
20. Su X, Tsai C, Wang H, Nickerson D, Li B.
21. —
22. Crump R, et al.
23. Tian L, et al.
24. Foster J, et al.
25. —
26. Green D, Kern H.
27. Taddy M, Gardner M, Chen L, Draper D.
28. Van der Laan M, Rose S.
29. Rosenblum M, Van der Laan MJ.
30. Wager S, Walther G.
31. List J, Shaikh A, Xu Y.