NOTE: For a (beta) version of the app for planning factorial designs, see: https://the-small-s-scientist.blogspot.com/2018/11/contrasts-factorial-tutorial.html
NOTE: I've updated the app with a few corrections, so there is a new version. (The November version has corrected degrees of freedom for the 3- and 4-condition within-subjects designs.)
If you would like to run the app in R, install the shiny and devtools packages and run the following:
library(shiny)
library(devtools)
source_url("https://git.io/fpI1R")
shinyApp(ui = ui, server = server)
Specifying Target MoE and Assurance
Target MoE should be specified as a number of standard deviations (usually a fraction; for details see Cumming, 2012; Cumming & Calin-Jageman, 2017). The symbol f will be used to refer to this standardized MoE. Target MoE (f) must be larger than zero (f will be automatically set to .05 if you accidentally fill in the value 0).
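To make the standardized scale concrete, here is a minimal sketch (with a hypothetical standard deviation) of how a standardized target MoE translates to the raw scale of the outcome variable:

# Illustration only (hypothetical values): a standardized target MoE f is the
# margin of error in standard deviation units, so the raw-scale target MoE is
# f times the (assumed) standard deviation of the outcome.
sd_outcome <- 10    # assumed standard deviation of the outcome variable
f <- 0.40           # target MoE in standardized units
f * sd_outcome      # raw-scale target MoE: 4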
I suggest using the following guidelines for target MoE (f):
Description | f
---|---
Extremely Precise | .05
Very Precise | .10
Precise | .25
Reasonably Precise | .40
Borderline Precise | .65
You should only use these guidelines if you lack the information you need for specifying a reasonable value for Target MoE.
Assurance is the probability that the obtained MoE will be no larger than the target MoE. I suggest setting assurance to at least .80.
Specifying the Design
The app works with independent and dependent designs for 2, 3, and 4 conditions. With two conditions, the analysis is equivalent to the independent-samples or paired-samples t-test; with more than two conditions, it is equivalent to a one-way independent or dependent (repeated measures) ANOVA.
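For reference, here is a minimal sketch (with simulated data and assumed variable names) of the standard R analyses these designs correspond to:

# Simulated data for illustration; the variable names (y, cond, id) are assumptions.
set.seed(1)
d <- data.frame(
  y    = rnorm(120),
  cond = factor(rep(c("a", "b", "c"), each = 40)),
  id   = factor(rep(1:40, times = 3))
)
t.test(y ~ cond, data = droplevels(subset(d, cond != "c")))  # two independent conditions
summary(aov(y ~ cond, data = d))                             # one-way independent ANOVA
summary(aov(y ~ cond + Error(id/cond), data = d))            # one-way dependent (repeated measures) ANOVA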
Specifying the Cross-Condition correlation
If you choose the dependent design, you also need to specify a value for the cross-condition correlation. This value should be larger than zero. One of the assumptions underlying the app is that there is only one observation per participant (or any other unit of analysis). That is why I like to think of this correlation as (conceptually related to) the reliability of the participant scores (averaged over conditions). From that perspective, a correlation around .60 would be borderline acceptable and one around .80 would be considered good enough. So, for worst-case scenarios use a correlation smaller than .60, and for optimistic scenarios use correlations of .80 or larger.
Note: for technical reasons a correlation of 1 will be automatically changed to .99.
For independent designs the correlation should equal 0. (The reliability interpretation above no longer applies there, but it is also not needed.)
Specifying Contrasts
Contrasts must obey the following rules.
- The sum of the contrast weights must equal zero;
- The sum of the absolute values of the contrast weights must equal two.
If the contrast weights conform to these rules, the resulting estimate is a difference between two or more means, expressed on the scale of the dependent variable (see Kline, 2013, for more information on contrasts).
The contrast estimate is simply the sum of the condition means multiplied by the contrast weights. For instance, with four condition means M1, M2, M3, M4, and contrast weights {0.5, 0.5, -0.5, -0.5}, the contrast estimate is 0.5*M1 + 0.5*M2 - 0.5*M3 - 0.5*M4 = (M1 + M2)/2 - (M3 + M4)/2: the difference between the mean of the first two conditions and the mean of the last two conditions.
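As a minimal sketch (with hypothetical condition means), checking both rules and computing a contrast estimate in R looks like this:

# Hypothetical condition means, for illustration only.
weights <- c(0.5, 0.5, -0.5, -0.5)
means   <- c(M1 = 5.2, M2 = 5.8, M3 = 4.1, M4 = 4.5)

sum(weights)          # rule 1: the weights sum to 0
sum(abs(weights))     # rule 2: the absolute weights sum to 2
sum(weights * means)  # contrast estimate: (M1 + M2)/2 - (M3 + M4)/2 = 1.2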
With more than two conditions, the app lets you choose between "Custom contrast" and "Helmert contrasts".
If you choose "Custom contrast" the app plans for precision of just that contrast. You will get the sample size needed and a figure of the expected results (see below). The default values give you the weights for a pair wise comparison of two of the conditions. You can simply type over these default values.
If you choose "Helmert contrasts" the app will give you an orthogonal set of Helmert contrasts as default values. You can simply type over these default values to get any contrast you like, but you cannot specify more contrasts than the number of conditions minus one.
If you choose "Helmert contrasts" the app will plan for the sample size of the contrast with the lowest precision. If you use the default values this will be the contrast specifying a pairwise comparison. For a set of contrasts the pair wise comparison estimate will be the least precise so if you know the sample size needed for a precise pairwise comparison, you know that the precision you will get for the other contrasts will be just as precise or more precise. The planning results will show the expected value for MoE for all contrasts, but the figure will only display expected results for the least precise contrast estimate.
Examples
Two groups independent design
I use the default values (see Figures 1 and 2) and click the "Get Sample size" button.
Figure 1. Values for Target MoE, assurance, and design
Figure 2. Standard contrasts for comparison of two conditions
The output is as follows:
The results give you the sample size per condition (n) and information about the target MoE (f), the assurance (assu), the number of conditions (k), and the cross-condition correlation (cor; the value is zero, as it should be in the independent design). With n = 55, there is an 80% probability that the obtained MoE will not be larger than the target f = .40.
If you use 55 participants per group, the expected MoE equals 0.38. Of course, using 55 participants per group makes the total sample size 110.
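As a rough back-of-the-envelope check (a sketch assuming the usual standardized-MoE formula for a two-group pairwise contrast, i.e. the t-critical value times the standard error of the contrast in standard deviation units; the app's exact computations may differ):

n  <- 55                            # participants per group
w  <- c(1, -1)                      # default pairwise contrast weights
df <- 2 * (n - 1)                   # error degrees of freedom
qt(.975, df) * sqrt(sum(w^2) / n)   # approximately 0.38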
The output also includes a plot of the expected results (what you can expect to happen on average). See Figure 3.
Figure 3. Expected results using n = 55 participants per group in the two groups independent design
This output helps you consider whether the expected MoE is small enough. Suppose, for instance, that the true difference equals .5 standard deviations, i.e. a medium effect. The figure shows that the expected contrast estimate is a medium effect, and the confidence interval shows that, on average, values ranging from small to large effects [.12, .88] will be included in the interval. If the distinction between small, medium, and large effects is important, an expected precision of f = .38 may not be enough, although small and large effects lie at the limits of the confidence interval.
A four groups dependent design
Technical Note: The app assumes that the sum of squares of the error variance can be decomposed into (k - 1) equal parts, where k is the number of conditions. I will relax this restriction in a future version of the app. For a custom contrast it is assumed that the contrast is part of an orthogonal set.
Suppose your major interest is the comparison between the average of two conditions and the average of two other conditions. You have a dependent (repeated measures) design in which participants will be exposed to each of the four treatment conditions. Let's plan for a target MoE of f = 0.25 with 80% assurance, and let's suppose our cross-condition correlation equals r = .70. I choose a custom contrast with weights {1/2, 1/2, -1/2, -1/2} (see Figure 4).
Figure 4. Input for sample size planning
The output is as follows.
So, we need 26 participants to have 80% assurance that the obtained MoE will not be larger than f = 0.25. The expected MoE equals .22, which, according to the guidelines above, is a precise estimate.
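Again as a rough, hedged check (a sketch that assumes compound symmetry, so that the contrast's sampling variance is (1 - r) times the sum of squared weights over n, and assumes n - 1 error degrees of freedom per contrast; the app's exact computations may differ):

n <- 26                                          # participants
r <- .70                                         # assumed cross-condition correlation
w <- c(1/2, 1/2, -1/2, -1/2)                     # custom contrast weights
qt(.975, n - 1) * sqrt((1 - r) * sum(w^2) / n)   # approximately 0.22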
If you choose "Helmert Contrasts" instead, and press the button without changing anything, the output is as follows.
Under Expected MoE you will see, for each of the three contrasts c1, c2, and c3, the contrast weights and the expected MoE. With 46 participants, the expected MoE of the least precise estimate (c3, the pairwise comparison) is smaller than the target MoE, and the expected MoEs of the other contrasts are smaller still. The Expected Results figure displays the results for the contrast with the largest expected MoE.