# What Are Delta Method Standard Errors?

Jeremy Albright


The delta method is an approach to performing inference on statistics for which the Central Limit Theorem does not guarantee a normal distribution. This is commonly the case for statistics that are functions of other (possibly normally distributed) statistics. For example, logistic regression estimates coefficients that are linear on the log-odds scale, but it is common to transform the coefficients into odds ratios or to calculate predicted probabilities because log-odds are difficult to intuit. An odds ratio is a function of a single coefficient, i.e. $$\text{OR} = f(\hat{\beta}) = e^{\hat{\beta}}$$, and predicted probabilities are a more complicated function of all of the coefficients. The challenge is determining a standard error for these transformations that makes statistical inference possible. Delta method standard errors are calculated by replacing the function we are truly interested in with a linear function that approximates it. A linear function of an (approximately) normally distributed statistic is itself normally distributed, which makes it possible to estimate standard errors, confidence intervals, and $$p$$-values.
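For a single coefficient, the approximation at the heart of the delta method can be sketched as a first-order Taylor expansion of $$f$$ around the true parameter value (this is the standard univariate result):

$$f(\hat{\beta}) \approx f(\beta) + f'(\beta)(\hat{\beta} - \beta)$$

Because the right-hand side is linear in $$\hat{\beta}$$, its variance follows immediately:

$$\text{Var}\left[f(\hat{\beta})\right] \approx \left[f'(\beta)\right]^2 \text{Var}(\hat{\beta})$$

For the odds ratio, $$f(\hat{\beta}) = e^{\hat{\beta}}$$ and $$f'(\hat{\beta}) = e^{\hat{\beta}}$$, so the delta method standard error of an odds ratio is simply $$e^{\hat{\beta}} \times \text{SE}(\hat{\beta})$$.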

The following post walks step-by-step through the process within the familiar context of odds ratios and predicted probabilities from logistic regression.

## Review of Logistic Regression Interpretation

Logistic regression produces results that are typically interpreted in one of two ways:

1. Odds ratios
2. Predicted probabilities

Odds are the ratio of the probability that something happens to the probability it doesn’t happen.

$$\Omega(X) = \frac{p(y=1|X)}{1-p(y=1|X)}$$

An odds ratio is the ratio of two odds, each calculated at a different value of $$X$$.
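To see why exponentiating a single coefficient yields an odds ratio, note that the logistic model is linear on the log-odds scale, $$\log \Omega(X) = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k$$. Comparing the odds at $$x_k + 1$$ to the odds at $$x_k$$, all other terms cancel:

$$\text{OR} = \frac{\Omega(x_k + 1)}{\Omega(x_k)} = \frac{e^{\beta_0 + \dots + \beta_k (x_k + 1)}}{e^{\beta_0 + \dots + \beta_k x_k}} = e^{\beta_k}$$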

There are strengths and weaknesses to either choice.

1. Predicted probabilities are intuitive, but require assuming a value for every covariate.
2. Odds ratios do not require specifying values for other covariates, but ratios of ratios are not always intuitive.

The following is an illustration using 2016 ANES data on whether a voter cast a ballot for Trump or Clinton. See a prior blog post for more extensive details.

```r
trump_model <- glm(vote ~ gender + educ + age, data = anes,
                   family = binomial(link = "logit"))

estimates <- round(coef(trump_model), 3)

results_tab <- tidy(trump_model) %>%
  mutate_if(is.numeric, ~ round(.x, 3)) %>%
  var_mapping(term) %>%
  dplyr::rename(Estimate = term, Beta = estimate, SE = std.error,
                z = statistic, p = p.value)

kable(results_tab, align = c("l", rep("c", 4)))
```
| Estimate              |  Beta  |  SE   |   z    |   p   |
|:----------------------|:------:|:-----:|:------:|:-----:|
| Constant              | -1.051 | 0.255 | -4.127 | 0.000 |
| Female                | -0.374 | 0.085 | -4.384 | 0.000 |
| Completed HS          |  0.655 | 0.231 |  2.839 | 0.005 |
| College < 4 Years     |  0.696 | 0.218 |  3.195 | 0.001 |
| College 4 Year Degree |  0.411 | 0.222 |  1.853 | 0.064 |
| Advanced Degree       | -0.424 | 0.229 | -1.850 | 0.064 |
| Age                   |  0.015 | 0.002 |  6.026 | 0.000 |

Interpreting as odds ratios:

• The odds of voting for Trump are $$100\times[\mbox{exp}(-.374) - 1]$$ = 31% lower for women compared to men.
• The odds of voting for Trump are $$100\times[\mbox{exp}(.655) - 1]$$ = 93% higher for those with only a high school diploma compared to those without a high school diploma.
• The odds of voting for Trump are $$100\times [\mbox{exp}(.696) - 1]$$ = 101% higher for those with some college (but no 4-year degree) compared to those without a high school diploma.
• Each increase in age of one year leads to a $$100\times [\mbox{exp}(.015) - 1]$$ = 2% increase in the odds of voting for Trump.
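The percent changes quoted above can be reproduced directly from the rounded coefficients in the table (because the coefficients are rounded to three decimals, the results are approximate):

```r
# Rounded coefficients from the table above
betas <- c(Female = -0.374, CompletedHS = 0.655,
           SomeCollege = 0.696, Age = 0.015)

# An odds ratio is exp(beta); the percent change in the odds is
# 100 * (exp(beta) - 1)
percent_change <- round(100 * (exp(betas) - 1))
percent_change
# Female: -31, CompletedHS: 93, SomeCollege: 101, Age: 2
```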

Interpreting as predicted probabilities:

```r
predict_data <- expand.grid(educ   = levels(anes$educ),
                            gender = levels(anes$gender),
                            age    = seq(20, 90, by = 10))

pred_probs <- predict_data %>%
  mutate(prob_trump = predict(trump_model, newdata = predict_data,
                              type = "response"))

ggplot(pred_probs, aes(x = age, y = prob_trump, group = educ, color = educ)) +
  geom_line() +
  facet_grid(~ gender) +
  labs(x = "Age (Years)", y = "Probability of Voting for Trump") +
  scale_color_discrete(name = "Highest\nEducation")
```
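As a sanity check, a single predicted probability can be computed by hand from the rounded coefficients in the table. The example below, a hypothetical 50-year-old woman whose highest education is a high school diploma, shows that a predicted probability really is a function of all of the coefficients at once:

```r
# Rounded coefficients from the table above
b_const  <- -1.051
b_female <- -0.374
b_hs     <-  0.655
b_age    <-  0.015

# Linear predictor on the log-odds scale for a 50-year-old woman
# with a high school diploma
log_odds <- b_const + b_female + b_hs + b_age * 50

# Inverse logit: p = exp(xb) / (1 + exp(xb)), i.e. plogis()
prob <- plogis(log_odds)
round(prob, 3)  # approximately 0.495
```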