I am using your wonderful `lm_robust` command to generate cluster-robust standard errors. I'm running an experiment with four treatment conditions (let's label them MM, FM, MF, and FF), plus the same four conditions in two 'control' experiments.

I have coded a variable called 'treatment' as a factor with those four levels, and a factor called 'condition' that takes one of three values: 'Main_Experiment', 'Control_1', or 'Control_2'. I want to test whether, for example, FM in Main_Experiment is greater than FM in Control_1 or Control_2.

My current setup looks like the following:

```
lm_robust(
  outcome ~ treatment:condition + education + income + sex + birth + race + trust_daily,
  data = all_data,
  clusters = team,
  se_type = "stata"
)
```
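
In case it helps clarify what I'm after: one workaround I've been experimenting with is collapsing the two factors into a single twelve-level factor, so each treatment-by-condition cell gets its own coefficient (a sketch using the variable names above; `cell` and `fit` are names I made up):

```
# Sketch: combine treatment and condition into one 12-level factor
# ("cell" is just an illustrative name)
library(estimatr)

all_data$cell <- interaction(all_data$treatment, all_data$condition, sep = "_")

fit <- lm_robust(
  outcome ~ cell + education + income + sex + birth + race + trust_daily,
  data = all_data,
  clusters = team,
  se_type = "stata"
)
```

That makes every cell mean directly estimable relative to the reference cell, but it still leaves me hand-picking contrasts and without any multiple-comparisons adjustment.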

I’d like to be able to compare specific levels of factors across conditions using something like TukeyHSD, as in the following:

```
aov_model <- aov(outcome ~ treatment * condition + education + income + sex + birth + race + trust_daily + turk_experience, data = all_data)
res <- TukeyHSD(aov_model, which = "treatment:condition")
tidy(res) %>%
  filter(adj.p.value < .05)
```

However, TukeyHSD doesn't accept lm_robust models. Using something like linearHypothesis from the car package is also awkward, because lm_robust drops one of the levels due to perfect collinearity, so I can't get multiple-comparisons-adjusted pairwise t-tests without cycling through every possible reference level.
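
To be concrete, for a single comparison I can do something like the following (a sketch, assuming the lm_robust fit above is saved as `fit`; the coefficient names are illustrative, and the real ones would come from `names(coef(fit))`):

```
# Sketch: one cross-condition contrast via car::linearHypothesis
# (coefficient names below are illustrative, not checked against a real fit)
library(car)

linearHypothesis(
  fit,
  "treatmentFM:conditionMain_Experiment = treatmentFM:conditionControl_1"
)
```

But running this for every pair and then adjusting the p-values by hand is exactly the tedium I'm hoping to avoid.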

Is there a better way of doing this kind of comparison within the lm_robust framework? I've been working on this problem for over a month and would really appreciate any pointers.