# Combining factorial and multi-arm designs

I’ve been intensely reviewing the DeclareDesign documentation, Library, and tutorials, but it would be great to check my understanding with the community before I dive in.

I would like to declare a design wherein respondents are shown a factorial vignette with .5 probability; for those who see the vignette, there are k factors with two levels each, with probabilities of .5. My primary questions are:

1. Is there enough power to compare outcomes for interactions between the factors in the vignette?
2. Is there enough power to compare outcomes for those who do not see the vignette (control) with each unique combination of factorial treatments?

My thinking is that I could use factorial_designer() to answer question #1 about power for the factorial vignette. Then, I could use multi_arm_designer() to answer question #2, with (1 + 2^k) arms (1 for control and 2^k for each unique combination of factorial treatments).

Does this make sense? Thanks!
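For concreteness, here is roughly what I had in mind using the DesignLibrary helpers (the `N` values and default arguments here are placeholders, not recommendations):

```r
library(DesignLibrary)

# Question 1: power for interactions within the vignette (k = 2 factors)
factorial_design <- factorial_designer(N = 1000, k = 2)

# Question 2: control vs. each of the 1 + 2^2 = 5 unique cells
multi_arm_design <- multi_arm_designer(N = 1000, m_arms = 5)

diagnose_design(factorial_design)
diagnose_design(multi_arm_design)
```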

I wouldn’t run separate designs; instead, I’d recommend creating a design specific to this use case, because you actually have the following split:

- 50% control
- 12.5% Vignette A1B1
- 12.5% Vignette A1B2
- 12.5% Vignette A2B1
- 12.5% Vignette A2B2
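The cell shares follow from multiplying the assignment probabilities: each vignette cell gets 0.5 × 0.5^k of the sample. A quick check in base R:

```r
# probability of landing in any one vignette cell:
# P(treated) * P(each factor level)^k
k <- 2
cell_prob <- 0.5 * 0.5^k
cell_prob                               # 0.125, i.e. 12.5% per cell
c(control = 0.5, rep(cell_prob, 2^k))   # the full five-way split
```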

I would set this up directly, with conditional estimands / estimators within the treated group (using the `subset` option) to address Part 1, and ordinary estimands / estimators for Part 2. This kind of design is pretty common in online advertising.

If k is particularly large, I’d probably also suggest correcting for multiple tests. I’m not aware of anyone using DD for k > 4, so do let us know if / how well the software works.
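For the multiple-testing correction, base R’s `p.adjust()` can be applied to the p-values from the factorial contrasts; for example (illustrative p-values, not from a real run):

```r
# adjust a set of p-values from the 2^k - 1 factorial contrasts
p_values <- c(0.003, 0.020, 0.045, 0.210, 0.700, 0.012, 0.049)
p.adjust(p_values, method = "holm")  # Holm step-down correction
p.adjust(p_values, method = "BH")    # Benjamini-Hochberg FDR
```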

Would something like the following work for you?

```r
library(DeclareDesign)
library(tidyverse)

design <-
  declare_population(N = 100000) +
  declare_potential_outcomes(
    Y ~ rbinom(
      n = N,
      size = 1,
      prob = case_when(
        Z == 0 ~ 0.50,
        Z == 1 & F1 == 0 & F2 == 0 ~ 0.53,
        Z == 1 & F1 == 1 & F2 == 0 ~ 0.56,
        Z == 1 & F1 == 0 & F2 == 1 ~ 0.59,
        Z == 1 & F1 == 1 & F2 == 1 ~ 0.62
      )
    ),
    conditions = list(
      Z = c(0, 1),
      F1 = c(0, 1),
      F2 = c(0, 1)
    )
  ) +
  declare_assignment(prob = 0.5, assignment_variable = "Z") +
  declare_assignment(blocks = Z, block_prob = c(0, 0.5), assignment_variable = "F1") +
  declare_assignment(blocks = Z, block_prob = c(0, 0.5), assignment_variable = "F2") +
  # reveal the observed Y from the potential outcomes given all three assignments
  declare_reveal(Y, assignment_variables = c("Z", "F1", "F2")) +
  declare_estimator(Y ~ Z + F1 * F2, model = lm_robust, term = TRUE)

draw_data(design)
run_design(design)
```

Thanks @Alex_Coppock! (and @nfultz too - I found your comment helpful but I was struggling to code it up.)

I think this code makes sense. Let me check my understanding. In your code, the potential outcomes are drawn from a binomial distribution with `size = 1` (i.e., Bernoulli, so Y is 0 or 1), and the values in the `prob` argument are the outcome means for those treatment conditions.

So if I wanted potential outcomes on a normal distribution (a Likert scale, let’s say), I could swap out `rbinom` for `rnorm` and pass `mean` and `sd` arguments rather than `size` and `prob`.
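That swap would look something like the fragment below (the means and sd here are placeholders I made up, and `case_when()` comes from dplyr, loaded via `library(tidyverse)`):

```r
# normal (e.g., Likert-style) potential outcomes instead of Bernoulli
declare_potential_outcomes(
  Y ~ rnorm(
    n = N,
    mean = case_when(
      Z == 0 ~ 4.0,
      Z == 1 & F1 == 0 & F2 == 0 ~ 4.2,
      Z == 1 & F1 == 1 & F2 == 0 ~ 4.4,
      Z == 1 & F1 == 0 & F2 == 1 ~ 4.6,
      Z == 1 & F1 == 1 & F2 == 1 ~ 4.8
    ),
    sd = 1.5
  ),
  conditions = list(Z = c(0, 1), F1 = c(0, 1), F2 = c(0, 1))
)
```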

Exactly! Or use `draw_likert()` - https://declaredesign.org/r/fabricatr/articles/common_social.html#likert-data-1