
Commit

Update conditioning.qmd
fixed typo
ngreifer authored Dec 8, 2023
1 parent 8ecbd6e commit dc1b15d
Showing 1 changed file with 1 addition and 1 deletion.
conditioning.qmd
@@ -41,7 +41,7 @@
$$

Overlap weights upweight units most like those in the other treatment group. Though there are other methods to compute weights that target an overlap sample (e.g., the "matching weights" of @liWeightingAnaloguePair2013), overlap weights tend to outperform them and produce a weighted sample with the most precision of any propensity score weights.

- A choice that researchers must make when using propensity score weighting is how to estimate the propensity score. This choice affects the properties of the weights (i.e., the balance they induces and the precision in the weighted sample). The most common method is a logistic regression of the treatment on the covariates. Other popular methods involve machine learning methods like generalized boosted modeling (GBM) [@mccaffreyPropensityScoreEstimation2004], Bayesian additive regression trees (BART) [@hillChallengesPropensityScore2011], and Super Learner [@alamShouldPropensityScore2019], though any model that produces predicted class probabilities can be used to estimate propensity scores. Often, versions of these methods incorporate balance optimization into the estimation of the weights; for example, a popular implementation of GBM chooses the value of a tuning parameter as that which minimizes an imbalance statistic [@mccaffreyPropensityScoreEstimation2004]. Logistic regression has a particular benefit when using overlap weights: the covariate means will be exactly balanced between the treatment groups.
+ A choice that researchers must make when using propensity score weighting is how to estimate the propensity score. This choice affects the properties of the weights (i.e., the balance they induce and the precision in the weighted sample). The most common method is a logistic regression of the treatment on the covariates. Other popular methods involve machine learning methods like generalized boosted modeling (GBM) [@mccaffreyPropensityScoreEstimation2004], Bayesian additive regression trees (BART) [@hillChallengesPropensityScore2011], and Super Learner [@alamShouldPropensityScore2019], though any model that produces predicted class probabilities can be used to estimate propensity scores. Often, versions of these methods incorporate balance optimization into the estimation of the weights; for example, a popular implementation of GBM chooses the value of a tuning parameter as that which minimizes an imbalance statistic [@mccaffreyPropensityScoreEstimation2004]. Logistic regression has a particular benefit when using overlap weights: the covariate means will be exactly balanced between the treatment groups.
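
To make the paragraph above concrete, here is a minimal R sketch of overlap weights computed from a logistic-regression propensity score. The data and variable names are simulated and illustrative, not from conditioning.qmd; the sketch also checks the exact mean-balance property the changed paragraph notes.

```r
# Minimal sketch (illustrative data, not from the source document)
set.seed(123)
n  <- 500
x1 <- rnorm(n)                        # continuous covariate
x2 <- rbinom(n, 1, 0.4)               # binary covariate
z  <- rbinom(n, 1, plogis(-0.3 + 0.8 * x1 + 0.5 * x2))  # treatment

# Propensity score: logistic regression of treatment on covariates
e <- fitted(glm(z ~ x1 + x2, family = binomial))

# Overlap (ATO) weights: treated units get 1 - e, control units get e
w <- ifelse(z == 1, 1 - e, e)

# The benefit noted above: with a logistic-regression propensity score,
# overlap weights balance covariate means exactly (up to numerical error)
weighted.mean(x1[z == 1], w[z == 1]) - weighted.mean(x1[z == 0], w[z == 0])
weighted.mean(x2[z == 1], w[z == 1]) - weighted.mean(x2[z == 0], w[z == 0])
```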

Many modern methods skip the step of estimating a propensity score and estimate the weights directly. Examples of this approach include entropy balancing [@hainmuellerEntropyBalancingCausal2012; @zhaoEntropyBalancingDoubly2017], stable balancing weights [@zubizarretaStableWeightsThat2015], and energy balancing [@hulingEnergyBalancingCovariate2022]. This distinction is explored in detail by @chattopadhyayBalancingVsModeling2020. A popular weighting method, covariate balancing propensity score (CBPS) weighting, combines optimization and logistic regression-based propensity score estimation [@imaiCovariateBalancingPropensity2014] (though this does not necessarily confer any benefits over methods that don't estimate a propensity score [@liPropensityScoreAnalysis2021a]). These optimization-based methods often exactly or approximately balance features of the distribution of covariates while retaining precision in the weighted sample, making them highly effective[^conditioning-1].
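
The "estimate the weights directly" idea can be sketched compactly for entropy balancing: rather than fitting a propensity score model, one solves a convex problem whose solution is a set of weights satisfying balance constraints. Below is a minimal R sketch of entropy balancing for the ATT in the spirit of @hainmuellerEntropyBalancingCausal2012, continuing the simulated data above; it is an illustrative sketch, not the implementation used by any of the cited packages.

```r
# Entropy balancing sketch for the ATT: control weights proportional to
# exp(X %*% lambda), with lambda chosen so weighted control covariate
# means exactly match the treated means
X  <- cbind(x1, x2)
m1 <- colMeans(X[z == 1, , drop = FALSE])        # treated means: balance targets
C  <- sweep(X[z == 0, , drop = FALSE], 2, m1)    # control covariates, centered at targets

# Dual objective: at its minimum, the weighted control means hit the targets
dual   <- function(lambda) log(sum(exp(C %*% lambda)))
lambda <- optim(rep(0, ncol(C)), dual, method = "BFGS")$par

w0 <- as.vector(exp(C %*% lambda))
w0 <- w0 / sum(w0)                               # normalized control weights

# Check: weighted control means match treated means (up to solver tolerance)
colSums(w0 * X[z == 0, , drop = FALSE]) - m1
```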
