Let’s say I have the following design, in which all estimates are informed by prior pilot data. Major thanks to @nfultz for holding my hand through the whole process here: Power calculations in non-standard design

```
ate <- 9.16 # size of the coefficient on outgroup_pairing from lm_robust(feelings_change ~ outgroup_pairing, data = data, clusters = team_id)
# Mean and Standard Deviation for the "pre" feelings measure for control condition
mean_pre_control = 19.6
sd_pre_control = 22.5
# Mean and Standard Deviation for the "post" feelings measure for control condition
mean_post_control = 20.3
sd_post_control = 23.0
# Mean and Standard Deviation for the "pre" feelings measure for treatment condition
mean_pre_treatment = 13.9
sd_pre_treatment = 18.6
# Mean and Standard Deviation for the "post" feelings measure for treatment condition
mean_post_treatment = 23.8
sd_post_treatment = 24.8
pop <- declare_population(
  N = 600,
  party = rep(c('D', 'R'), each = N / 2),
  # Generate draws from a truncated normal since these are "feelings thermometers"
  # bounded between 0 and 100
  feelings_toward_outgroup_pre = msm::rtnorm(N,
                                             mean = mean_pre_control,
                                             sd = sd_pre_control,
                                             lower = 0, upper = 100),
  feelings_toward_outgroup_post = msm::rtnorm(N,
                                              mean = mean_post_control,
                                              sd = sd_post_control,
                                              lower = 0, upper = 100)
)
# An "assn_team" function that generates pairings of DD, RR, and mixed (DR/RD) teams
assn_team <- declare_assignment(handler = function(data) {
  N <- nrow(data)
  # Same-party teams ("D k" / "R k") plus mixed teams ("X k") shared across parties
  D_teams <- c(paste("D", rep(1:(N / 8), times = 2)), paste("X", 1:(N / 4)))
  R_teams <- c(paste("R", rep(1:(N / 8), times = 2)), paste("X", 1:(N / 4)))
  data$team <- NA
  data$team[data$party == 'D'] <- D_teams
  data$team[data$party == 'R'] <- R_teams
  # Treatment indicator: 1 if on a mixed-party ("X") team
  data$Z <- +grepl("X", data$team)
  # Variable that gets the party of one's partner (sanity check)
  data$partner_party <- with(data, ave(party, team, FUN = rev))
  data
})
df <- assn_team(pop())
potential_outcomes <- declare_potential_outcomes(
  Y_Z_0 = feelings_toward_outgroup_post - feelings_toward_outgroup_pre,
  Y_Z_1 = (feelings_toward_outgroup_post - feelings_toward_outgroup_pre) + ate
)
df <- potential_outcomes(df)
estimand <- declare_estimand(ATE = mean(Y_Z_1 - Y_Z_0))
reveal <- declare_reveal(Y, Z)
estimator <- declare_estimator(Y ~ party * Z,
                               estimand = estimand,
                               model = lm_robust,
                               clusters = team,
                               term = "Z")
design <- pop + assn_team + potential_outcomes + estimand + reveal + estimator
diagnosis <- diagnose_design(design)
```
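
Since the assignment handler records `partner_party`, a quick sanity check on the pairings is possible (a sketch; the `table()` cross-tab is just for eyeballing):

```r
# Every mixed ("X") team should pair one D with one R, so all cross-party
# pairings should have Z == 1; same-party pairings should have Z == 0.
df <- assn_team(pop())
with(df, table(party, partner_party, Z))
stopifnot(all(df$Z[df$party != df$partner_party] == 1))
stopifnot(all(df$Z[df$party == df$partner_party] == 0))
```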

This results in the following output:

```
 Design Label Estimand Label Estimator Label Term N Sims   Bias   RMSE  Power Coverage Mean Estimate SD Estimate Mean Se Type S Rate Mean Estimand
       design            ATE       estimator    Z    500   0.12   2.90   0.90     0.94          9.28        2.90    2.86        0.00          9.16
                                                         (0.12) (0.09) (0.01)   (0.01)        (0.12)      (0.09)  (0.01)      (0.00)        (0.00)
```

In my actual study, I conduct contrasts like so:

```
library(emmeans)

model <- lm_robust(feelings_change ~ politics * outgroup_pairing,
                   clusters = team_id,
                   se_type = "stata",
                   data = data)
# Build an emmeans reference grid from the lm_robust fit
rg <- qdrg(object = model, data = data)
outgroup_means <- emmeans(rg, ~ outgroup_pairing | politics)
contrast_test <- contrast(outgroup_means, method = "pairwise")
```

```
politics = Democrat:
contrast estimate SE df t.ratio p.value
0 - 1 -11.66 3.33 199 -3.502 0.0006
politics = Republican:
contrast estimate SE df t.ratio p.value
0 - 1 -6.66 3.27 199 -2.040 0.0427
```

I want to make sure that I’m sufficiently powered to conduct these types of contrasts, but I can’t decide on the best way to do so within DeclareDesign, because the contrasts are computed from the fitted model itself. One idea is to re-parametrize so that there are four “treatment” conditions (DD, DR, RD, and RR), but I’m not sure how to make the (DD vs. DR) and (RR vs. RD) comparisons explicit.
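
One direction I’ve sketched (untested, and assuming the older DeclareDesign API where `tidy_estimator()` wraps a custom estimator function) is to run the emmeans pipeline inside the estimator itself, so each simulation returns one row per contrast and `diagnose_design()` reports power per contrast. In this simulation both within-party contrasts target the same `ate`, so they can share the estimand; the `term` labels and column mapping below are my guesses:

```r
contrast_estimator <- declare_estimator(
  handler = tidy_estimator(function(data) {
    model <- lm_robust(Y ~ party * Z, clusters = team, se_type = "stata", data = data)
    rg <- emmeans::qdrg(object = model, data = data)
    cons <- emmeans::contrast(emmeans::emmeans(rg, ~ Z | party), method = "pairwise")
    out <- as.data.frame(cons)
    # One row per contrast: (DD vs DR) for party == 'D', (RR vs RD) for party == 'R'
    data.frame(term      = paste(out$party, out$contrast),
               estimate  = out$estimate,
               std.error = out$SE,
               p.value   = out$p.value)
  }),
  estimand = estimand,
  label = "emmeans_contrasts"
)
```

But I don’t know whether this is preferable to re-parametrizing the assignment into four conditions.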

Thanks for any assistance in advance.

One unrelated question: I noticed that when I don’t cluster at the team level, my power actually goes down. This surprised me, since I assumed clustering would almost always inflate standard errors. Is this behavior expected?
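
For reference, this is how I compared the two (an untested sketch; it just drops `clusters` from the estimator and diagnoses both designs side by side, which `diagnose_design()` accepts):

```r
# Same design, but with an estimator that ignores the team clustering
estimator_noclust <- declare_estimator(Y ~ party * Z,
                                       estimand = estimand,
                                       model = lm_robust,
                                       term = "Z")
design_noclust <- pop + assn_team + potential_outcomes + estimand +
  reveal + estimator_noclust
diagnose_design(design, design_noclust)
```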