
Thursday 21 June 2018

A rule of thumb for setting target MoE

One of the most difficult aspects of sample size planning for precision is the specification of a target Margin of Error (MoE). Here, I would like to introduce a simple rule of thumb, in the hope that it helps you in determining a reasonable target MoE.
The rule of thumb is applied to estimating the difference between two independent group means, where the two populations are normally distributed with equal variances.

Goal 1: Assessing the direction of an effect

Sample size planning starts with formulating a goal for the research. A very common goal is to determine the direction of an effect. For this goal, it helps if the confidence interval of the difference contains only positive or only negative values. That is, you want a confidence interval that excludes the value 0, because if that value is included, you would probably conclude that the estimate is consistent with both positive and negative effects. Thus, our first goal is to obtain a confidence interval of the mean difference that excludes the value 0.

Now, a confidence interval excludes 0 if the obtained MoE is at most equal to the obtained effect size estimate. Suppose the estimate equals the true effect of, say, 0.50; then we want MoE to be at most 0.50, otherwise 0 will be included in the interval. But if our estimate underestimates the true effect, say the estimate equals 0.30, we want MoE to be at most 0.30. Conversely, if we overestimate the effect, MoE can be larger than 0.50.
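
To make the exclusion condition concrete, here is a minimal check in R (the helper ciExcludesZero is mine, purely for illustration):

ciExcludesZero <- function(estimate, moe) abs(estimate) > moe
ciExcludesZero(estimate = .50, moe = .35) # TRUE: 0 lies outside [.15, .85]
ciExcludesZero(estimate = .30, moe = .35) # FALSE: 0 lies inside [-.05, .65]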

This means that we cannot simply say: we expect that the true effect is .50, so let's plan for a target MoE that with 80% assurance is at most .50. That target MoE may be too large when the true effect is underestimated, depending on the extent of the underestimation. So, in specifying target MoE, we should take into account that underestimates of the effect size occur. (In fact, in a huge collection of direct replications, underestimates occur with a relative frequency of 50%.) In other words, we do not only want to exclude zero from the interval, we also want that to happen in a large proportion of direct replications. This will be our second goal. I will call the probability associated with this second goal the probability of exclusion (PE).
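
That 50% figure is easy to check with a small simulation (a sketch; the per-group sample size n = 50 is arbitrary):

set.seed(123)
d <- .50; n <- 50
est <- replicate(10000, mean(rnorm(n, d)) - mean(rnorm(n, 0)))
mean(est < d) # close to .50: half of the replications underestimate d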

The rule of thumb is that if we want 80% probability that a random confidence interval excludes zero, we should plan for an expected MoE equal to $f = d / \sqrt{2}$, that is, the effect size divided by the square root of 2. Since there is 50% probability that the obtained MoE will be larger than the expected MoE, this amounts to planning for target MoE = f = d / √2 with 50% assurance, or simply without assurance. You can do this in the ESCI software, but also with the R functions provided below.
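
Why dividing by the square root of 2 works can be seen with a rough approximation. The 95% CI excludes zero when the t-statistic exceeds the critical value $t_c$ (ignoring the negligible opposite tail), and the t-statistic has noncentrality parameter $\delta = d\sqrt{n/2}$. Setting expected MoE $t_c\sqrt{2/n}$ equal to $d/\sqrt{2}$ implies $t_c = \delta/\sqrt{2}$, so that $\delta \approx \sqrt{2} \times 1.96 \approx 2.77$, whatever the value of $d$. Approximating the noncentral t-distribution with a normal distribution then gives PE $\approx \Phi(\delta - t_c) = \Phi((\sqrt{2} - 1) \times 1.96) \approx .79$, close to the 80% we want. The same reasoning with the square root of 3 gives PE $\approx \Phi((\sqrt{3} - 1) \times 1.96) \approx .92$, which is behind the 90% variant mentioned at the end of this post.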

The first example in the code below is an illustration of planning for assessing the direction of the effect, with true effect size d = .50. If we want 80% probability of having only positive values in our confidence interval, we should plan for a target MoE = expected MoE = f = d / √2 = 0.3535. Using the sampleSize function below, this gives a sample size n = 63 per group, or a total sample size N = 2*63 = 126. The probability that the confidence interval excludes 0 equals approximately 80% (p = 0.7951). So the rule of thumb of planning for d / √2 seems to work pretty well.

Goal 2: Distinguishing between effect sizes

If your research goal is to estimate the value of the effect size instead of its direction, the rule of thumb can be used as follows. Suppose we do not know the true effect size, but want a high probability of being able to distinguish between small (d = .20) and large (d = .80) effects. That is, if the true effect is .20 we want the value .80 to be excluded from the confidence interval, and if the true effect is .80, we want the value .20 to be excluded.

We can proceed as follows. The difference between the effect sizes is .80 - .20 = .60, and we use this value to determine target MoE. Thus, if we plan for a target MoE = expected MoE = .60 / √2 = 0.4243, we should have approximately 80% PE that the obtained confidence interval will exclude 0.80 if the true effect is 0.20, and vice versa. The functions below give sample size n = 44, and the probability of exclusion equals .7947. So our rule of thumb seems to work pretty well again. See example 2 in the code below.

Alternatively, we could take the region of practical equivalence (ROPE) into account. Suppose our equivalence range equals .10 sigma. If we want enough precision to distinguish large from small effects, we can plan as follows. We take the difference between a large effect and the upper equivalence value of a small effect or, equivalently, the difference between a small effect and the lower equivalence value of a large effect, i.e. .80 - .30 = .50, and plan for f = .50 / √2. If the effect is large, we expect a confidence interval that excludes the equivalence range of the small effect (and vice versa), with 80% probability of exclusion.

But we could also take the difference between the lower equivalence value of a large effect and the upper equivalence value of a small effect, i.e. .70 - .30 = .40, and plan for f = .40 / √2 (see the third example in the code below). This gives us 80% PE that the CI for any true value within the ROPE of the one effect will exclude the values in the ROPE of the other. For example, if the true effect is .70 and expected MoE equals .40 / √2 = .2828, there is approximately 80% probability that the 95% CI excludes .30, which is in the ROPE of a small effect. The expected CI is .70 +/- .2828 = [0.4172, 0.9828]; note that the lower limit is larger than the upper limit of the ROPE for d = .20, as we want it to be. Note, however, that if the true effect is small (d = .20), the CI is expected to exclude effects equivalent to large effects, which is consistent with our research goal, but not the value 0 or effects equivalent to a medium effect: the expected CI is [-0.0828, 0.4828]. (This is not a problem, of course, since excluding those values was not the purpose of our research.)
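
These expected intervals are simple arithmetic, which we can check directly in R:

f <- .40 / sqrt(2)
round(.70 + c(-1, 1) * f, 4) # 0.4172 0.9828: excludes .30
round(.20 + c(-1, 1) * f, 4) # -0.0828 0.4828: includes 0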

As a final example, suppose we want sufficient precision to distinguish small from medium effects (or large from medium effects). If we take the ROPE perspective, with an equivalence range of +/- .10 sigma, the lower equivalence value of the medium effect equals .50 - .10 = .40 and the upper equivalence value of the small effect equals .20 + .10 = .30. If we want 80% PE that the CI will be small enough to distinguish small from medium effects, we should plan for expected MoE f = (.40 - .30) / √2 = 0.0707. Using the functions below, this requires a sample size n = 1538 per group. (See example 4 in the code below.)

Setting target MoE: conclusion

In summary, the rule of thumb is to divide the effect size d by √2 and plan for an expected MoE equal to this value. This gives you a sample size for which the probability is approximately 80% that the CI will not contain 0. In the case of distinguishing effect sizes, one option is to divide the difference between the lower equivalence value of the larger effect and the upper equivalence value of the smaller effect by √2 and plan for an expected MoE equal to this value. This gives you a sample size with approximately 80% PE that the CI of the estimated true value of one effect excludes the values in the ROPE of the other effect.

Do you want at least 90% PE? Use the square root of three instead of the square root of two in determining target MoE (see example 5 in the code below).

# expected MoE (half-width of the 95% CI) of the difference
# between two independent means, in units of sigma,
# with n the size of each group
eMoe <- function(n) {
  qt(.975, 2*(n - 1)) * sqrt(2/n)
}

# squared distance between expected MoE and target MoE
cost <- function(n, tMoe) {
  (eMoe(n) - tMoe)^2
}

# per-group sample size for which expected MoE
# is as close as possible to target MoE
sampleSize <- function(tMoe) {
  optimize(cost, interval=c(10, 5000), tMoe = tMoe)$minimum
}

# FIRST EXAMPLE
# plan for 80% probability of excluding 0,
# i.e. estimating the direction when the true effect
# equals .50

d = .50

# application of the rule of thumb:
f = .50 / sqrt(2)

# sample size (uses ceiling() to round up):
n = ceiling(sampleSize(f))
n
## [1] 63
# Probability of Exclusion (here taken to be equivalent to
# power of the two-sided t-test, since the true direction is unknown)
df = 2*(n - 1)
ncp = f / sqrt(1/n) #or ncp = d / sqrt(2/n)

pt(qt(.025, df), df, ncp) + 1 - pt(qt(.975, df), df, ncp)
## [1] 0.7951683
# SECOND EXAMPLE: 
# distinguish between small and large effect sizes: 
d = .80 - .20
f = d / sqrt(2)

n = ceiling(sampleSize(f))
n
## [1] 44
df = 2*(n - 1)
ncp = f / sqrt(1/n) #or ncp = d / sqrt(2/n)

#PE: 

pt(qt(.025, df), df, ncp) + 1 - pt(qt(.975, df), df, ncp)
## [1] 0.79467
# EXAMPLE 3: distinguish small and large with ROPE
# ROPE small and large: 
rope.small = c(.10, .30)
rope.large = c(.70, .90)

d = rope.large[1] - rope.small[2]
f = d / sqrt(2)

n = ceiling(sampleSize(f))

n
## [1] 98
df = 2*(n - 1)
ncp = f / sqrt(1/n) #or ncp = d / sqrt(2/n)

#PE: 

pt(qt(.025, df), df, ncp) + 1 - pt(qt(.975, df), df, ncp)
## [1] 0.7956414
# Example 4: distinguish medium from small 
# or medium from large with ROPE

rope.medium = c(.40, .60)
d = rope.medium[1] - rope.small[2]
f = d / sqrt(2)

n = ceiling(sampleSize(f))

n
## [1] 1538
df = 2*(n - 1)
ncp = f / sqrt(1/n) #or ncp = d / sqrt(2/n)

#PE:

pt(qt(.025, df), df, ncp) + 1 - pt(qt(.975, df), df, ncp)
## [1] 0.7916783
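
Finally, a sketch of the square root of three variant mentioned above, reusing the same functions; the values in the comments are approximations, not verified output.

# EXAMPLE 5: plan with the square root of three,
# aiming for at least 90% PE with true effect d = .50
d = .50
f = d / sqrt(3)

n = ceiling(sampleSize(f))
n # approximately 94

df = 2*(n - 1)
ncp = d / sqrt(2/n) # note: f / sqrt(1/n) does not apply here,
                    # since f = d / sqrt(3), not d / sqrt(2)

#PE:
pt(qt(.025, df), df, ncp) + 1 - pt(qt(.975, df), df, ncp)
# approximately .92, so indeed at least 90%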