
Saturday, 14 October 2017

The omnibus F-test may be ignored if you use multiple comparison procedures


I think  trying to be scientific with a small s involves asking critical questions about  common wisdom or common practice. In this post, I would like to focus on multiple comparisons in the context of ANOVA. What does common practice indicate?

Common wisdom suggests doing multiple comparisons only if the F-test is significant


Let's have a look at some practical advice concerning multiple comparisons found on the web (R-bloggers.com) and in Field (2013).

"One way to begin an ANOVA is to run a general omnibus test. The advantage to starting here is that if the omnibus test comes up insignificant, you can stop your analysis and deem all pairwise comparisons insignificant. If the omnibus test is significant, you should continue with pairwise comparisons" (https://www.r-bloggers.com/r-tutorial-series-one-way-anova-with-pairwise-comparisons/)

"When we have a statistically significant effect in ANOVA and an independent variable of more than two levels, we typically want to make follow-up comparisons. There are numerous methods for making pairwise comparisons and this tutorial will demonstrate how to execute several different techniques in R." (https://www.r-bloggers.com/r-tutorial-series-anova-pairwise-comparison-methods/)
And have a look at how the textbook I used to use in my statistics course explains it.

"It might seem a a bit unhelpful that an ANOVA doesn't tell you which groups are different from which, given that having gone to the trouble of running an experiment, you probably need to know more than 'there's some difference somewhere or other'. You might wonder, therefore, why we don't just carry out a lot of t-tests, which would tell us very specifically whether pairs of group menas differ. Actually, the reason has already been explained in Section 2.1.6.7: every time you run multiple tests on the same data you inflate the potential Type I errors that you make. However, we'll return to this point in Section 11.5 when we look at how we follow up an ANOVA to discover where the group difference lie." (Field, 2015, p. 442).
Although, in all honesty, on p. 459 Field writes:

"The least significance difference (LSD) pairwise comparison makes no attempt to control Type I error and is equivalent to performing multiple t-tests on the data. The only difference is that LSD requires the overall ANOVA to be significant."

This is meant to inform readers about the relative merits of one post hoc procedure compared to another in terms of Type I and Type II errors. Crucially, it is not mentioned that the other post hoc procedures require the overall ANOVA to be significant (as common wisdom seems to suggest). However, his flow chart of the ANOVA procedure (p. 460) clearly suggests that multiple comparison procedures should be used as post hoc procedures (after the ANOVA is significant).

Thus, common "statistical" wisdom seems to suggest that multiple comparison procedures are to be used as post hoc procedures following up a significant omnibus F-test, the reason being that this two-step procedure minimizes the probability of type I errors.

Now, let's ask ourselves whether this common sense is, well, sensible.

Doing multiple comparisons only after a significant F-test affects power negatively


Wilcox (2017) contains some useful information regarding our question. In his discussion of the much-used Tukey HSD procedure (the Tukey-Kramer method), he references Bernhardson (1975), who shows that the probability of at least one type I error among pairwise comparisons of estimates of equal population means (i.e. true null hypotheses) is no longer equal to $\alpha$ if the procedure is carried out only after a significant omnibus test. That is, if we use our beloved two-step procedure.

The consequence of the two-step procedure for the Tukey HSD is that $\alpha$ is reduced. Thus, if we want the probability of at least one type I error in our multiple comparison procedure to be at most $\alpha = .05$, using the two-step procedure leads to a lowered $\alpha$. This is, of course, bad news, because in the event that not all of the null hypotheses are true, lowering $\alpha$ increases $\beta$, the probability of not rejecting when the null hypothesis is false (keeping the sample size constant, of course). In other words, the two-step procedure decreases the power of the multiple comparison procedure.
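To get a feel for how much power may be at stake, here is a small sketch of the general point that lowering $\alpha$ lowers power, using an ordinary two-sample t-test (the effect size of half a standard deviation and the 40 participants per group are arbitrary values I picked for the illustration):

#power of a two-sample t-test with n = 40 per group and a true
#difference of 0.5 standard deviations, at two significance levels
power.t.test(n = 40, delta = .5, sd = 1, sig.level = .05)$power
power.t.test(n = 40, delta = .5, sd = 1, sig.level = .04)$power
#the second value is smaller: a lower alpha means lower power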

In the words of Wilcox (2017):

"In practical terms, when it comes to controlling the probability of at least one type I error, there is no need to first reject with the ANOVA F test to justify using the Tukey-Kramer method. If the Tukey-Kramer method is used only after the F test rejects, power can be reduced. Currently, however, common practice is to use the Tukey-Kramer method only if the F-test rejects. That is, the insight reported by Bernhardson is not yet well known."  (p. 385).

In conceptual terms, the fact that the probability of at least one type I error in the multiple comparison procedure is smaller than $\alpha$ if the procedure is carried out only after the F-test rejects is pretty clear, at least to me. Suppose we reject if the p-value of the F-test is smaller than or equal to 5%. This is also the probability that we conduct the multiple comparison test over repeated replications of the same experiment. Of that 5%, not every application of the procedure will result in at least one type I error. Indeed, a puzzling fact for many beginning researchers is that the F-test can be significant while none of the pairwise comparisons is. In other words, in some of those 5% of cases in which we perform the procedure following a significant F-test, none of the pairwise null hypotheses will be rejected, unless it is guaranteed that at least one type I error is made per application.
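Put a bit more formally: if all population means are equal, the two-step procedure produces at least one type I error only when the F-test rejects and at least one pairwise comparison rejects, so

$$P(\text{at least one type I error})=P(F\text{ rejects})\times P(\text{at least one pairwise rejection}\mid F\text{ rejects})\leq P(F\text{ rejects})\approx .05,$$

with strict inequality as long as a significant F-test is not always accompanied by at least one significant pairwise comparison.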

(With no adjustment of $\alpha$ for multiple comparisons, this will happen with high probability, though still without guarantee, if a huge number of pairwise comparisons is made. For instance, with 99 unadjusted multiple comparisons the probability of at least one type I error is approximately 99%. This is why it makes sense to demand that the F-test is significant before testing multiple comparisons with the LSD procedure, although the latter seems to run into trouble with more than 3 groups (Wilcox, 2017).)
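For the record, the 99% figure follows from treating the 99 unadjusted comparisons as independent tests at $\alpha = .05$:

#probability of at least one type I error among 99 (approximately
#independent) unadjusted comparisons at alpha = .05
1 - (1 - .05)^99   #roughly .994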

A quick simulation study


My hunch is that the two-step procedure is unnecessary for the Tukey-Kramer method as well as for other multiple comparison procedures (the exception being Fisher's LSD procedure, which was designed as a post hoc procedure to be used as a follow-up to a significant F-test, as Field (2013) rightly points out), but I only focused on the Tukey-Kramer method. What I did was a simple simulation study with a four-group between-subjects design (all $\mu$'s equal), in which I estimated the probability of at least one type I error both with and without the two-step procedure.

set.seed(456)
#number of groups
ngr = 4

#number of participants
n = 40

#group is a factor
gr <- factor(rep(1:ngr, each=n))

#vector for storing rejections F-test
Reject <- rep(0, 10000)

#vector for storing the number of rejections among the
#multiple comparisons
RejectHSD <- rep(0, 10000)

for (i in 1:10000) {
  #all population means are equal, so any rejection is a type I error
  y = rnorm(ngr*n)
  mod = aov(y ~ gr)
  Reject[i] = anova(mod)$"Pr(>F)"[1] <= .05
  #adjusted p-values of all pairwise comparisons
  PS <- TukeyHSD(mod)$gr[, 4]
  RejectHSD[i] = sum(PS <= .05)
}

#probability type I error F-test
sum(Reject)/length(Reject)
## [1] 0.0515
#probability at least one type I error Tukey HSD
sum(RejectHSD > 0) / length(RejectHSD)
## [1] 0.0503
#probability of at least one type I error when the Tukey HSD is
#carried out only after a significant F-test (the two-step procedure)
sum(RejectHSD[Reject==TRUE] > 0) / length(RejectHSD)
## [1] 0.0424

Even though a single (relatively tiny) simulation, which by the way takes quite a while to run, is not necessarily convincing, it does illustrate the main points of this post. First, the probability of at least one incorrect rejection using the TukeyHSD function is close to .05. With this particular random seed it even performs a little better than the ANOVA F-test: .0503 versus .0515. This illustrates that, even without considering whether the omnibus test is significant, the main demand of not rejecting too many true null hypotheses is completely satisfied. So, in practical terms, you can safely ignore the omnibus test if your concerns are about $\alpha$.

Second, the probability of incorrectly rejecting at least one true pairwise null hypothesis when the Tukey HSD procedure is used only after a significant ANOVA F-test is estimated to be .0424. This shows that the two-step procedure leads to a larger decrease in the actual type I error probability than intended. Even though this may seem like good news from the perspective of avoiding type I errors, the downside is that pairwise null hypotheses that are false (and potentially important) may go undetected.

Conclusion


Common wisdom and practice suggest that multiple comparison procedures should be carried out only after a significant omnibus test. We have seen that this is not at all necessary if we use a multiple comparison procedure that is designed to control the type I error probability. To my knowledge, most of the procedures conventionally thought of as post hoc tests are designed in this manner, the exception being the LSD procedure, which does require a significant F-test. For practical purposes, then, do not bother with the omnibus test (note the exception) if you are planning to compare all the treatment means pairwise.

This practical advice does not mean, of course, that I am suggesting you spend your time comparing all treatment means. Most of the time, focused comparisons are a more fruitful way of analysing your data. But I'll leave that topic for another time. 

References

Field, A. (2013). Discovering Statistics Using IBM SPSS Statistics (4th ed.). London: Sage.
Wilcox, R. (2017). Understanding and Applying Basic Statistical Methods Using R. Hoboken, NJ: Wiley.

Monday, 9 October 2017

Planning for a precise slope estimate in simple regression

In this post, I will show you a way of determining a sample size for obtaining a precise estimate of the slope $\beta_1$ of the simple linear regression equation $\hat{Y_i} = \beta_0 + \beta_1X_i$. The basic ingredients we need for sample size planning are a measure of the precision, a way to determine the quantiles of the sampling distribution of our measure of precision, and a way to calculate sample sizes.

As our measure of precision we choose the Margin of Error (MOE), which is the half-width of the 95% confidence interval of our estimate (see: Cumming, 2012; Cumming & Calin-Jageman, 2017; see also www.thenewstatistics.com).
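To make this concrete, here is a minimal sketch with made-up data, showing that the half-width of the 95% confidence interval of the slope reported by confint equals the .975 quantile of the t-distribution times the estimated standard error of the slope:

#toy example: MOE of the slope as half the width of its 95% CI
set.seed(1)
x <- rnorm(25)
y <- .5*x + rnorm(25)
fit <- lm(y ~ x)
se.b1 <- summary(fit)$coefficients["x", "Std. Error"]
qt(.975, fit$df.residual)*se.b1
unname(diff(confint(fit)["x", ]))/2   #the same value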

 

The distribution of the margin of error of the regression slope

In the case of simple linear regression, assuming normality and homogeneity of variance, MOE is $t_{.975}\sigma_{\hat{\beta_1}}$, where $t_{.975}$ is the .975 quantile of the central t-distribution with $N - 2$ degrees of freedom, and $\sigma_{\hat{\beta_1}}$ is the standard error of the estimate of $\beta_1$.

An expression for the squared standard error of the estimate of $\beta_1$ is $\frac{\sigma^2_{Y|X}}{\sum{(X_i - \bar{X})}^2}$ (Wilcox, 2017): the variance of Y given X divided by the sum of squared deviations of X from its mean. The variance $\sigma^2_{Y|X}$ equals $\sigma^2_y(1 - \rho^2_{YX})$, the variance of Y multiplied by 1 minus the squared population correlation between Y and X, and it is estimated with the residual variance $\frac{\sum{(Y - \hat{Y})^2}}{df_e}$, where $df_e = N - 2$.

The estimated squared standard error is given in (1)
$$\hat{\sigma}_{\hat{\beta_{1}}}^{2}=\frac{\sum(Y-\hat{Y})^{2}/df_{e}}{\sum(X-\bar{X})^{2}}. \tag{1} $$
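As a quick sanity check on (1), again with made-up data, we can compute the right-hand side of (1) by hand and compare its square root to the standard error that lm reports for the slope:

#toy check of (1) against the standard error reported by lm
set.seed(2)
N <- 50
x <- rnorm(N)
y <- .5*x + rnorm(N)
fit <- lm(y ~ x)
se2.b1 <- (sum(resid(fit)^2)/(N - 2)) / sum((x - mean(x))^2)
sqrt(se2.b1)
summary(fit)$coefficients["x", "Std. Error"]   #the same value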

With respect to the sampling distribution of MOE, we first note the following. The distribution of estimates of the residual variance in the numerator of (1) is a scaled $\chi^2$-distribution:

$$\frac{\sum(Y-\hat{Y})^{2}}{\sigma_{y}^{2}(1-\rho^{2})}\sim\chi^{2}(df_{e}),$$

thus
$$\frac{\sum(Y-\hat{Y})^{2}}{df_{e}}\sim\frac{\sigma_{y}^{2}(1-\rho^{2})\chi^{2}(df_{e})}{df_{e}}.$$

Second, we note that
$$\frac{\sum(X-\bar{X})^{2}}{\sigma_{X}^{2}}\sim\chi^{2}(df),$$

where $df = N - 1$, therefore

$$\sum(X-\bar{X})^{2}\sim\sigma_{X}^{2}\chi^{2}(df).$$

Alternatively, since $\sum{(X - \bar{X})^2} = df\,\hat{\sigma}^2_X$, where $\hat{\sigma}^2_X$ is the sample variance of X, multiplying the right-hand side by 1 ($\frac{df}{df}$) gives

$$df\hat{\sigma}_{X}^{2}\sim df\sigma_{X}^{2}\chi^{2}(df)/df.$$

In terms of the sampling distribution of (1), then, we have the ratio of two (scaled) $\chi^2$-distributions, one with $df_e = N - 2$ degrees of freedom and one with $df = N - 1$ degrees of freedom. More specifically:
$$ \hat{\sigma}_{\hat{\beta_{1}}}^{2}\sim\frac{\sigma_{y}^{2}(1-\rho^{2})\chi^{2}(df_{e})/df_{e}}{df\sigma_{X}^{2}\chi^{2}(df)/df}=\frac{\sigma_{y}^{2}(1-\rho^{2})}{df\sigma_{X}^{2}}\frac{\chi^{2}(df_{e})/df_{e}}{\chi^{2}(df)/df}=\frac{\sigma_{y}^{2}(1-\rho^{2})F(df_{e},df)}{df\sigma_{X}^{2}},$$

which means that the sampling distribution of MOE is:

$$ \hat{MOE}\sim t_{.975}(N-2)\sqrt{\frac{\sigma_{y}^{2}(1-\rho^{2})F(N-2,N-1)}{(N-1)\sigma_{X}^{2}}}. \tag{2} $$

This last equation, (2), can be used to obtain quantiles of the sampling distribution of MOE, which enables us to determine assurance MOE: the value that estimated MOE will not exceed, with a given probability, under repeated sampling. For instance, if we want to know the .80 quantile of estimates of MOE, that is, if assurance is .80, we determine the .80 quantile of the (central) F-distribution with N - 2 and N - 1 degrees of freedom and plug it into (2) to obtain a value of MOE that will not be exceeded in 80% of replication experiments.

For instance, suppose $\sigma^2_Y = 1$, $\sigma^2_X = 1$, $\rho = .50$, $N = 100$, and the assurance is .80; then, according to (2), 80% of estimated MOEs will not exceed the value given by:

vary = 1
varx = 1
rho = .5
N = 100 
dfe = N - 2
dfx = N - 1
assu = .80
t = qt(.975, dfe)
MOE.80 = t*sqrt(vary*(1 - rho^2)*qf(assu, dfe, dfx)/(dfx*varx))
MOE.80
## [1] 0.1880535

 

What does a quick simulation study tell us? 

A quick simulation study may be used to check whether this is at all accurate. And, yes, the estimated quantile from the simulation study is pretty close to what we would expect based on (2). If you run the code below, the estimate equals 0.1878628.

 
library(MASS)
set.seed(355)
m = c(0, 0)

#note: s below is the variance-covariance matrix. In this case,
#rho and the cov(y, x) have the same values
#otherwise: rho = cov(x, y)/sqrt(varY*VarX) (to be used in the 
#functions that calculate MOE)
#equivalently, cov(x, y) = rho*sqrt(varY*varX) (to be used
#in the specification of the variance-covariance matrix for 
#generating bivariate normal variates)

s = matrix(c(1, .5, .5, 1), 2, 2)
se <- rep(0, 10000)
for (i in 1:10000) {
  theData <- mvrnorm(100, m, s)
  mod <- lm(theData[,1] ~ theData[,2])
  #the fourth element of the coefficient matrix is the
  #standard error of the slope estimate
  se[i] <- summary(mod)$coefficients[4]
}
MOE = qt(.975, 98)*se
quantile(MOE, .80)
##       80% 
## 0.1878628

 

Planning for precision



If we want to plan for precision, we can do the following. We start by making a function that calculates the assurance quantile of the sampling distribution of MOE described in (2). Then we formulate a squared cost function, which we will optimize for the sample size using the optimize function in R.

Suppose we want to plan for a target MOE of .10 with 80% assurance. We may do the following.

vary = 1
varx = 1
rho = .5
assu = .80
tMOE = .10

MOE.assu = function(n, vary, varx, rho, assu) {
        varY.X = vary*(1 - rho^2)
        dfe = n - 2
        dfx = n - 1
        t = qt(.975, dfe)
        q.assu = qf(assu, dfe, dfx)
        MOE = t*sqrt(varY.X*q.assu/(dfx * varx))
        return(MOE)
}

cost = function(x, tMOE) { 
        (MOE.assu(x, vary=vary, varx=varx, rho=rho, assu=assu) - tMOE)^2
}

#note: the sample size is at least 40, at most 5000
#since we already know that N = 100 is not enough, instead of 40
#we might just as well set N = 100 as the lower limit of the interval
(samplesize = ceiling(optimize(cost, interval=c(40, 5000), 
                               tMOE = tMOE)$minimum))
## [1] 321
#check the result: 
MOE.assu(samplesize, vary, varx, rho, assu)
## [1] 0.09984381
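As an aside, because MOE.assu decreases monotonically with n, the same sample size can also be found by solving MOE.assu(n) = tMOE directly with uniroot instead of minimizing a squared cost function; this is just an alternative sketch that reuses the objects defined above.

#alternative: solve MOE.assu(n) = tMOE directly for n
root = uniroot(function(n) MOE.assu(n, vary, varx, rho, assu) - tMOE,
               interval = c(40, 5000))$root
ceiling(root)   #should again give 321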

Let's simulate with the proposed sample size


Let's check it with a simulation study. The estimated .80 quantile of MOE in the simulation below (random seed 355) is approximately .10, which is pretty close to the value of 0.0998 we would expect based on (2).

set.seed(355)
m = c(0, 0)

#note: s below is the variance-covariance matrix. In this case,
#rho and the cov(y, x) have the same values
#otherwise: rho = cov(x, y)/sqrt(varY*VarX) (to be used in the 
#functions that calculate MOE)
#equivalently, cov(x, y) = rho*sqrt(varY*varX) (to be used
#in the specification of the variance-covariance matrix for 
#generating bivariate normal variates)

s = matrix(c(1, .5, .5, 1), 2, 2)
se <- rep(0, 10000)
samplesize = 321
for (i in 1:10000) {
  theData <- mvrnorm(samplesize, m, s)
  mod <- lm(theData[,1] ~ theData[,2])
  #the fourth element of the coefficient matrix is the
  #standard error of the slope estimate
  se[i] <- summary(mod)$coefficients[4]
}
MOE = qt(.975, samplesize - 2)*se
quantile(MOE, .80)
##       80% 
## 0.1007269

 

References


Cumming, G. (2012). Understanding the New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. New York: Routledge.
Cumming, G., & Calin-Jageman, R. (2017). Introduction to the New Statistics: Estimation, Open Science, and Beyond. New York: Routledge.
Wilcox, R. (2017). Understanding and Applying Basic Statistical Methods Using R. Hoboken, NJ: Wiley.