
Saturday, 29 April 2017

Planning for precision with samples of participants and items

Many experiments involve the (quasi-)random selection of both participants and items. Westfall et al. (2014) provide a Shiny app for power calculations in five different experimental designs that sample both participants and items. Here I want to present my own Shiny app for planning for precision of contrast estimates (for the comparison of up to four groups) in these experimental designs. The app can be found here: https://gmulder.shinyapps.io/precision/

(Note: I have taken the code of Westfall's app and added or modified code to get precision estimates instead of power; without Westfall's app, my modified version would never have existed.)

The plan for this post is as follows. I will present the general theoretical background (mixed model ANOVA combined with ideas from Generalizability Theory) by considering the comparison of three groups in a counterbalanced design.
Note 1: This post uses MathJax, so it is probably unreadable on mobile devices. A (tidied up) pdf version of this post can be downloaded here: download the pdf
Note 2: For simulation studies testing the procedure, go here: https://the-small-s-scientist.blogspot.nl/2017/05/planning-for-precision-simulation.html
Note 3: I use the terms stimulus and item interchangeably; I still have to correct this to make things more readable and comparable to Westfall et al. (2014).
Note 4: If you do not like the technical details, you can skip to the illustration of the app at the end of the post.

 

The general idea


The focus of planning for precision is to try to minimize the half-width of a 95% confidence interval for a comparison of means (in our case). Following Cumming's (2012) terminology, I will call this half-width the Margin of Error (MOE). The actual purpose of the app is to find required sample sizes for participants and items that give a high probability ('assurance') of obtaining an MOE no larger than some pre-specified target value.
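To make these two quantities concrete, here is a minimal sketch in Python (not the app's code, which builds on Westfall's R code) for the simplest possible case of two independent groups without crossed random effects for items; the sample size, target MOE, and simulation settings are my own assumptions, purely for illustration.

```python
# Minimal sketch: MOE of a two-group mean comparison, and the 'assurance'
# that the MOE stays at or below a target, estimated by simulation.
# All numbers below are assumed for illustration only.
import numpy as np
from scipy import stats

def moe(group1, group2, confidence=0.95):
    """Half-width of the CI for the difference in means (pooled variance)."""
    n1, n2 = len(group1), len(group2)
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * np.var(group1, ddof=1) +
           (n2 - 1) * np.var(group2, ddof=1)) / df
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    return stats.t.ppf(1 - (1 - confidence) / 2, df) * se

def assurance(n_per_group, sd=1.0, target_moe=0.5, n_sim=5000, seed=1):
    """Estimated probability that the obtained MOE does not exceed the target."""
    rng = np.random.default_rng(seed)
    moes = [moe(rng.normal(0, sd, n_per_group), rng.normal(0, sd, n_per_group))
            for _ in range(n_sim)]
    return np.mean(np.array(moes) <= target_moe)

print(assurance(n_per_group=40))  # e.g. probability that MOE <= 0.5 sd with n = 40
```

The app does the analogous calculation for designs in which both participants and items contribute random variance, which is where the mixed model ANOVA machinery comes in.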

Friday, 21 April 2017

What is NHST, anyway?

I am not a fan of NHST (Null Hypothesis Significance Testing). Or maybe I should say, I am no longer a fan. I used to believe that rejecting null-hypotheses of zero differences based on the  p-value was the proper way of gathering evidence for my substantive hypotheses. And the evidential nature of the p-value seemed so obvious to me, that I frequently got angry when encountering what I believed were incorrect p-values, reasoning that if the p-value is incorrect, so must be the evidence in support of the substantive hypothesis. 

For this reason, I refused to use the significance tests that were most frequently used in my field, i.e. performing a by-subjects analysis and a by-item analysis and concluding that an effect exists if both are significant, because the by-subjects analysis in particular regularly leads to p-values that are too low, which leads to believing you have evidence when you really don't. And so I spent a huge amount of time, coming from almost no statistical background (I followed no more than a few introductory statistics courses), mastering mixed model ANOVA and hierarchical linear modelling (up to a reasonable degree, i.e. being able to get p-values for several experimental designs), because these techniques, so I believed, gave me correct p-values. At the moment, this all seems rather silly to me.

I still have some NHST unlearning to do. For example, I frequently catch myself looking at a 95% confidence interval to see whether zero is inside or outside the interval, and actually feeling happy when zero lies outside it (this happens when the result is statistically significant). Apparently, traces of NHST are strongly embedded in my thinking. I still have to tell myself not to be silly, so to say. 

One reason for writing this blog is to sharpen my thinking about NHST and to try to figure out new and comprehensible ways of explaining to students and researchers why they should be very careful in considering NHST as the sine qua non of research. Of course, if you really want to make your reasoning clear, one of the first things you should do is define the concepts you're reasoning about. The purpose of this post is therefore to make clear what my "definition" of NHST is.

My view of NHST  is very much based on how Gigerenzer et al. (1989) describe it: 

"Fisher's theory of significance testing, which was historically first, was merged with concepts from the Neyman-Pearson theory and taught as "statistics" per se. We call this compromise the "hybrid theory" of statistical inference, and it goes without saying the neither Fisher nor Neyman and Pearson would have looked with favor on this offspring of their forced marriage." (p. 123, italics in original). 

Actually, Fisher's significance testing and Neyman-Pearson's hypothesis testing are fundamentally incompatible (I will come back to this later), but almost no texts explaining statistics to psychologists "presented Neyman and Pearson's theory as an alternative to Fisher's, still less as a competing theory. The great mass of texts tried to fuse the controversial ideas into some hybrid statistical theory, as described in section 3.4. Of course, this meant doing the impossible." (p. 219, italics in original). 

So, NHST is an impossible, as in logically incoherent, "statistical theory", because it (con)fuses concepts from incompatible statistical theories. If this is true, which I think it is, doing science with a small s, which involves logical thinking, disqualifies NHST as a main means of statistical inference. But let me write a little bit more about Fisher's ideas and those of Neyman and Pearson, to explain the illogic of NHST. 

Thursday, 6 April 2017

Lazy Larry's argument and the Mechanical Mind's reply

Meet Lazy Larry, the non-critically thinking reviewer of your latest experimental result. (The story also applies to Lazy Larry's reviews of non-experimental results). Lazy Larry does not believe your results signify anything "real". Never mind your excellent experimental procedures and controls, and forget about your highly reliable instruments, Lazy Larry refuses to think about your results and by default dismisses them as "due to chance".

"Due to chance" is simply a short-hand description of, say, your experimental group seems to outperform the control group on average, but that is not due to your experimental manipulation, but due to sampling error: you just happened to have randomly assigned better performing participants to the experimental group than to the control group.

Enter the Mechanical Mind. Its sole purpose is to persuade Lazy Larry that the results are not "due to chance". Mechanical Mind has learned that Lazy Larry is quite easily persuaded (remember that Larry doesn't think), so Mechanical Mind always does the following:

  1. He pretends to have randomly assigned a random sample of participants to either the experimental or the control group. (Note that the pretending concerns having drawn a random sample; since we assume an excellent experiment, we may just as well assume that the sample is in fact a random sample, but the Mechanical Mind always assumes a random sample as part of its test procedure, even if the sample is a convenience sample.) 
  2. He formulates a null-hypothesis that the mean population values are exactly equal, to the millionth decimal and beyond. 
  3. He calculates a test statistic, say a t-value. 
  4. He determines a p-value:  the probability of obtaining a t-value as large as or larger than the one obtained in the experiment, under the pretense of repeated sampling from the population, assuming the null-hypothesis is true. 
  5. He rejects the null-hypothesis if the p-value is smaller than .05 and calls that result significant. 
  6. He concludes that the results are not "due to chance" and automatically takes that conclusion to mean that the effect of the experimental manipulation is "real."
Being a non-thinker, Lazy Larry immediately agrees: if the p-value is smaller than .05, the effect is not "due to chance", it is a real effect.
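As a minimal sketch of this ritual (my own toy example, not anyone's actual analysis; the simulated data and group sizes are assumptions), the whole procedure can be written in a few mechanical lines:

```python
# Steps 1-6 of the Mechanical Mind's procedure, as a toy illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
experimental = rng.normal(0.4, 1.0, 30)  # assumed data, purely for illustration
control = rng.normal(0.0, 1.0, 30)

# Steps 2-4: a nil null-hypothesis of exactly equal population means,
# a t-statistic, and a two-sided p-value under pretended repeated sampling.
t_value, p_value = stats.ttest_ind(experimental, control)

# Steps 5-6: the mechanical verdict.
if p_value < .05:
    print(f"p = {p_value:.3f} < .05: 'significant', so 'not due to chance', so 'real'")
else:
    print(f"p = {p_value:.3f} >= .05: 'not significant', so 'due to chance'")
```

Note that nothing in these lines requires any thinking about the plausibility of the null-hypothesis, the appropriateness of the t-test, or the meaning of the verdict, which is exactly the Small s Scientist's complaint below.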

Enter a Small s Scientist. The Small s Scientist notices something peculiar. She notices that both Lazy Larry and the Mechanical Mind do not really think, which strikes her as odd. Doesn't science involve thinking? Here we have Larry who has only one standard argument against any experimental result, and here we have the Mechanical Mind who has only one standard reply: a mindlessly performed ritual of churning out a p-value. Yes, it may shut up Lazy Larry, if the p-value happens to be smaller than .05, but the Small s Scientist is not lazy, she really thinks about experimental results.

She wonders about Lazy Larry's argument. We have an experiment with excellent experimental procedures and controls, with highly reliable instruments, so although sampling error always has some role to play, it doesn't immediately come to mind as a plausible explanation for the obtained effect. Again, simply assuming this by default is the mark of an unthinking mind.

She thinks about the Mechanical Mind's procedure. The Mechanical Mind assumes that the mean population values are completely equal up to the millionth decimal or more. Why does the Mechanical Mind assume this? Is it really plausible that it is true? To the millionth decimal? Furthermore, she realizes that she has just read the introduction section of your paper, in which you very intelligently and convincingly argue that your independent variable must have a major role to play in explaining the variation in the dependent variable. But now we have to assume that the population means are exactly the same? Reading your introduction section makes this assumption highly implausible.

She recognizes that the Mechanical Mind made you do a t-test. But is the t-test appropriate in the particular circumstances of your experiment? The assumptions of the test are that you have sampled from normally distributed populations with equal variances. Do these assumptions apply? The Mechanical Mind doesn't seem to be bothered much about these assumptions at all. How could it? It cannot think.

She notices the definition of the p-value. The probability of obtaining a value of, in this case, the t-statistic as large as or larger than the one obtained in the experiment, assuming repeated random sampling from a population in which the null-hypothesis is true. But wait a minute, now we are assigning a probability statement to an individual event (i.e. the obtained t-statistic). Can we do that? Doesn't a frequentist conception of probability rule out assigning probabilities to single events? Isn't the frequentist view of probability restricted to the possibly infinite collection of single events and the frequency of occurrence of the possible values of the dependent variable? Is it logically defensible to assign probabilities to single events and at the same time make use of a frequentist conception of probability? It strikes the Small s Scientist as silly to think it is.

She understands why the Mechanical Mind focuses on the probability of obtaining results (under repeated sampling from the null-population) as extreme as or more extreme than the one obtained. It is simply that any obtained result has a very low probability (if not 0; e.g. if the dependent variable is continuous), no matter the hypothesis. So, the probability of a single obtained t-statistic is so low as to be inconsistent with every hypothesis. But why, she wonders, do we need to consider all the results that were not obtained (i.e. the more extreme results) in determining whether a "due to chance" explanation has some plausibility (remember that the "due to chance" argument does not seem to be very plausible to begin with)? Why, she wonders, do we not restrict ourselves to the data that were actually obtained?

The Small s Scientist gets a little frustrated when thinking about why a null-hypothesis can be rejected if p < .05 and not when p > .05. What is the scientific justification of using this criterion? She has read a lot about statistics but never found a justification for using .05, apart from Fisher claiming that .05 is convenient, which is not really a justification. It doesn't seem very scientific to justify a critical value simply by saying that Fisher said so. Of course, the Small s Scientist knows about decision procedures à la Neyman and Pearson's hypothesis testing, in which setting α can be done on a rational basis by considering loss functions, but considering loss functions is not part of the Mechanical Mind's procedure. Besides, is the purpose of the Mechanical Mind's procedure not to counter the "due to chance" explanation, by providing evidence against it, instead of deciding whether or not the result is due to chance? In any case, the 5% criterion is an unjustified criterion, and using 5% by default is, let's repeat it again, the mark of an unthinking mind.

The final part of the Mechanical Mind's procedure strikes the Small s Scientist as embarrassingly silly. Here we see a major logical error. The Mechanical Mind assumes, and Lazy Larry seems to believe, that a low p-value (according to an unjustified convention of .05) entails that results are not "due to chance" whereas a high p-value means that the results are "due to chance", and therefore not real. Maybe it should not surprise us that unthinking minds, mechanical, lazy, or both, show signs of illogical reasoning, but it seems to the Small s Scientist that illogical thinking has no part to play in doing science.

The logical error is the error of the transposed conditional. The conditional is: if the null-hypothesis (and all other assumptions, including repeated random sampling) is true, the probability of obtaining a t-statistic as large as or larger than the one obtained in the experiment is p. That is, if all of the obtained t-statistics in repeated samples are "due to chance", the probability of obtaining one as large as or larger than the one obtained in the experiment equals p. Its incorrect transposition is: if the p-value is small, then the null-hypothesis is not true (i.e. the results are not "due to chance"). This is very close to going from "If the null-hypothesis is true, these results (or more extreme results) do not happen very often" to "If these results happen, the null-hypothesis is not true". More abstractly, the Mechanical Mind goes from "If H, then probably not R" to "If R, then probably not H", where R stands for the results and H for the null-hypothesis.
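One way to see the gap between the two conditionals is with some made-up numbers (mine, purely illustrative, not taken from any of the sources cited here): even when the results are improbable under the null-hypothesis, the null-hypothesis need not be improbable given the results, for instance when it was plausible to begin with and the results are also rare under the alternative.

```python
# Illustrative, assumed numbers showing that a small p(R | H) does not
# by itself make H improbable given R.
p_R_given_H = 0.04      # results this extreme are rare if H (the null) is true
p_R_given_notH = 0.10   # ... and also fairly rare under the alternative (low power)
prior_H = 0.80          # H considered quite plausible beforehand (assumed)

p_R = p_R_given_H * prior_H + p_R_given_notH * (1 - prior_H)
p_H_given_R = p_R_given_H * prior_H / p_R
print(round(p_H_given_R, 2))  # 0.62: H is still more likely than not, despite R
```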

To sum up. The Small s Scientist believes that science involves thinking. The Mechanical Mind's procedure is an unthinking reply to Lazy Larry's standard argument that experimental results are "due to chance". The Small s Scientist tries to think beyond that standard argument and finds many troubling aspects of the Mechanical Mind's procedure. Here are the main points.

  1. The plausibility of the null-hypothesis of exactly equal population means cannot be taken for granted. Like every hypothesis it requires justification.
  2. The choice of a test statistic cannot be automatically determined. Like every methodological choice it requires justification. 
  3. The interpretation of the p-value as a measure of evidence against the "due to chance" argument requires assigning a probability statement to a single event. This is not possible from a frequentist conception of probability. So, doing so while simultaneously holding a frequentist conception of probability means that the procedure is logically inconsistent. The Small s Scientist does not like logical inconsistency in scientific work. 
  4. The p-value as a measure of evidence includes "evidence" not actually obtained. How can a "due to chance" explanation (as implausible as it often is) be discredited on the basis of evidence that was not obtained? 
  5. The use of a criterion of .05 is unjustified, so even if we allow logical inconsistency in the interpretation of the p-value (i.e. assigning a probability statement to a single event), which a Small s Scientist does not, we still need a scientific justification of that criterion. The Mechanical Mind's procedure does not provide such a justification. 
  6. A large p-value does not entail that the results "are due to chance".  A p-value cannot be used to distinguish "chance" results from "non-chance" results. The underlying reasoning is invalid, and a Small s Scientist does not like invalid reasoning in scientific work. 



Tuesday, 4 April 2017

Type I error probability does not destroy the evidence in your data

Have you heard about that experimental psychologist? He decided that his participants did not exist, because the probability of selecting them, assuming they exist, was very small indeed (p < .001). Fortunately, his colleagues were quick to reply that he was mistaken. He should decide that they do exist, because the probability of selecting them, assuming they do not exist, is very remote (p < .001). Yes, even unfunny jokes can be telling about the silliness of significance testing.

But sometimes the silliness is more subtle, for instance in a recent blog post by Daniel Lakens, the 20% Statistician, with the title "Why Type I errors are more important than Type 2 errors (if you care about evidence)." The logic of his post is so confused that I really do not know where to begin. So, I will aim at his main conclusion that type I error inflation quickly destroys the evidence in your data.

(Note: this post uses mathjax and I've found out that this does not really work well on a (well, my) mobile device. It's pretty much unreadable).

Lakens seems to believe that the long-term error probabilities associated with decision procedures have something to do with the actual evidence in your data. What he basically does is define evidence as the ratio of power to size (i.e. the probability of a type I error), which is basically a form of the positive likelihood ratio $$PLR = \frac{1 - \beta}{\alpha},$$ which makes it plainly obvious that manipulating $\alpha$ (for instance by multiplying it by some constant c) influences the PLR more than manipulating $\beta$ by the same amount. So, his definition of "evidence" makes part of his conclusion true by definition: $\alpha$ has more influence on the PLR than $\beta$. But it is silly to reason on the basis of this that the type I error rate destroys the evidence in your data.
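For instance, with some assumed values of $\alpha$ and $\beta$ (mine, purely for illustration), doubling $\alpha$ halves the PLR, while doubling $\beta$ reduces it far less:

```python
# How the positive likelihood ratio PLR = (1 - beta) / alpha responds to
# inflating alpha versus inflating beta by the same factor (assumed numbers).
def plr(alpha, beta):
    return (1 - beta) / alpha

alpha, beta, c = 0.05, 0.20, 2

print(plr(alpha, beta))      # baseline: 0.80 / 0.05 = 16.0
print(plr(c * alpha, beta))  # alpha doubled: 0.80 / 0.10 = 8.0 (halved)
print(plr(alpha, c * beta))  # beta doubled: 0.60 / 0.05 = 12.0 (a smaller drop)
```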

The point is that $\alpha$ and $\beta$ (or the probabilities of type I and type II errors) have nothing to say about the actual evidence in your data. To be sure, if you commit one of these errors, it is the data (in NHST combined with arbitrary, i.e. unjustified, cut-offs) that lead you to these errors. Thus, even $\alpha = .01$ and $\beta = .01$ do not guarantee that actual data lead to a correct decision.

Part of the problem is that Lakens confuses evidence and decisions, which is a very common confusion in NHST practice. But deciding to reject a null-hypothesis is not the same as having evidence against it (there is this thing called a type I error). It seems that NHST-ers and NHST apologists find this very, very hard to understand. As my grandmother used to say: deciding that something is true does not make it true.

I will try to make plausible that decisions are not evidence (see also my previous post here). This should be enough to show you that the error probabilities associated with the decision procedure tell you nothing about the actual evidence in your data. In other words, this should be enough to convince you that type I error rate inflation does not destroy the evidence in your data, contrary to the 20% Statistician's conclusion.

Let us consider whether the frequency of correct (or false) decisions is related to the evidence in the data. Suppose I tell you that I have a Baloney Detection Kit (based for example on the baloney detection kit at skeptic.com) and suppose I tell you that according to my Baloney Detection Kit the 20% Statistician's post is, well, Baloney. Indeed, the quantitative measure (amount of Baloneyness) I use to make the decision is well above the critical value. I am pretty confident about my decision to categorize the post as Baloney as well, because my decision procedure rarely leads to incorrect decisions. The probability that I decide that something is Baloney when it is not is only $\alpha = .01$ and the probability that I decide that something is not-Baloney when it is in fact Baloney is only 1% as well ($\beta = .01$).

Now, the 20% Statistician's conclusion states that manipulating $\alpha$, for instance by setting $\alpha = .10$, destroys the evidence in my data. Let's see. The evidence in my data is of course the amount of Baloneyness of the post. (Suppose my evidence is that the post contains 8 dubious claims.) How does setting $\alpha$ have any influence on the amount of Baloneyness? The only thing setting $\alpha$ does is influence the frequency of incorrect decisions to call something Baloney when it is not. No matter what value of $\alpha$ (or $\beta$, for that matter) we use, the amount of Baloneyness in this particular post (i.e. the evidence in the data) is 8 dubious claims.
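A toy sketch of the point (the decision rule, cut-offs, and numbers are all mine, purely illustrative): whatever critical value the chosen $\alpha$ implies, the evidence remains the 8 dubious claims.

```python
# The observed evidence is the count of dubious claims; alpha only moves
# the critical value used for the decision, not the count itself.
def baloney_decision(dubious_claims, critical_value):
    return dubious_claims > critical_value

evidence = 8                    # dubious claims found in the post (the data)
for critical_value in (5, 3):   # stricter vs. more lenient cut-off (larger alpha)
    print(baloney_decision(evidence, critical_value), "based on", evidence, "claims")
# The verdict depends on the cut-off; the 8 dubious claims do not.
```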

To be sure, if you tell the 20% Statistician that his post is Baloney, he will almost certainly not ask you how many times you are right and wrong in the long run (characteristics of the decision procedure); he will want to see your evidence. Likewise, he will probably not argue that your decision procedure is inadequate for the task at hand (maybe it is applicable to science only and not to non-scientific blog posts), but he will argue about the evidence (maybe by simply deciding (!) that what you are saying is wrong, or by claiming that the post does not contain 8 dubious claims, but only 7).

The point is, of course, this: the long-term error probabilities $\alpha$ and $\beta$ associated with the decision procedure have no influence on the actual evidence in your data. The conclusion of the 20% Statistician is simply wrong. Type I error inflation does not destroy the evidence in your data, nor does type II error inflation.