Intuitive as Bayesian conceptualizations seem, a fascinating twist to this story is that the Bayesian nature of science is implicit. Scientists do not literally quantify their priors and plug the relevant numbers into Bayes’ equation – you are extremely unlikely to find this in a research article. The influence of priors is manifest only in the introductory and concluding discussion sections, in which researchers rationalize why they performed this particular query (because it had a high prior) and interpret the query result through the lens of their pre-existing belief system.
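To see what such a literal application would look like, here is a toy sketch in Python (the numbers are invented purely for illustration and do not come from any study): one explicit pass through Bayes’ equation, with a quantified prior, a likelihood for the observed result under each hypothesis, and the resulting posterior.

```python
# Toy illustration (invented numbers): one explicit pass through Bayes' equation.
prior_h = 0.2               # P(H): prior probability that the hypothesis is true
p_data_given_h = 0.8        # P(D | H): probability of the observed result if H is true
p_data_given_not_h = 0.1    # P(D | not-H): probability of the result if H is false

# Bayes' equation: P(H | D) = P(D | H) P(H) / [P(D | H) P(H) + P(D | not-H) P(not-H)]
posterior_h = (p_data_given_h * prior_h) / (
    p_data_given_h * prior_h + p_data_given_not_h * (1 - prior_h)
)
print(round(posterior_h, 3))  # a 0.2 prior becomes a 0.667 posterior
```

Nothing of the sort appears in a typical results section; the update, if it happens at all, happens informally in the heads of authors and readers.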
Instead, science in general, and experimental psychology in particular, is dominated by a different framework, called “null-hypothesis significance testing” or “the Neyman-Pearson approach”. To undergraduate students it is often presented as the only approach – a single, objective structure – when in fact it is heavily criticized within statistics and is only one approach among many. But Bayes and Neyman-Pearson are not necessarily in conflict: each is internally coherent; they simply conceptualize probability differently, to serve different purposes. Nor should we hasten to conclude that the two types of computation cannot occur alongside each other in the brain. Neyman-Pearson, too, has been used in theories of the mind, such as Harold Kelley’s causal attribution theory and signal detection theory. And just as in science, the brain is capable of one-trial learning without sampling, so we should be wary of presenting either framework as a grand, general-purpose mechanism.
There is, meanwhile, a third approach, called “likelihood analysis”, which is still in the minority but gaining popularity. We will now consider the nuts and bolts of these procedures, but to do so we will first need a good understanding of probability distributions.
In his book “Rationality for Mortals”, psychologist Gerd Gigerenzer once compared the three approaches to the three Freudian selves in unconscious conflict. The Bayesian approach corresponds to the instinctual Id, which longs for an epistemic interpretation and wishes to weigh evidence in terms of hypothesis probabilities. The likelihood (or “Fisher”) approach corresponds to the pragmatic Ego, which, in order to get papers published, ritualistically applies an internally incoherent hybrid approach that ignores beta and power calculations and settles on sample sizes by some rule of thumb. Finally, the Neyman-Pearson approach corresponds to the purist Superego, which conscientiously sets alpha and beta in advance, reports p-values as p < 0.05 rather than p = 0.0078, and is aware that the p-value reflects not a degree of confidence but a decision-supporting quality control.
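To give the Superego’s demands some substance, a pre-registered error-rate analysis might look like the following sketch (an example of my own, using conventional values rather than anything from Gigerenzer’s book): alpha, beta and an assumed effect size are fixed before any data are collected, and the sample size is derived from them.

```python
# Hedged sketch: fix alpha and beta (power) in advance and derive the sample size.
# The effect size of 0.5 is an assumption chosen for illustration.
from statsmodels.stats.power import TTestIndPower

alpha = 0.05          # tolerated rate of false alarms (Type I error)
power = 0.80          # 1 - beta, i.e. a tolerated miss rate (Type II error) of 0.20
effect_size = 0.5     # assumed standardized group difference (Cohen's d)

n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=alpha,
                                          power=power,
                                          alternative='two-sided')
print(round(n_per_group))   # roughly 64 participants per group
```

Only once these quantities are fixed does data collection begin, and the eventual report says no more than whether the result fell below the pre-set alpha.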
The most important conceptual difference between the Bayesian and likelihood approaches on the one hand and Neyman-Pearson on the other is that the former espouse “the likelihood principle”: the likelihood contains all the information required to update a belief. It should not matter how the researcher mentally groups tests together, or what sample size he plans. Stopping rules, multiple testing and the timing of explanation (planned versus post hoc) do not matter, and credibility or likelihood intervals do not need to be adjusted for them. The likelihood interval will cluster around the true value as data are gathered, regardless of the stopping rule (a concrete illustration follows below). However, standards of good experimental practice still apply. Suppose, for example, that your Bayesian brain hypothesized “It is dark in the room” and sampled information only when the eyes were closed. Because such sampling is not randomized and cannot differentiate between the two hypotheses “Eyelids down” and “Dark outside”, it is likely to lead to false beliefs.
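The point about stopping rules can be illustrated with the classic coin-flipping example (a minimal sketch, not taken from this text): nine heads and three tails carry the same likelihood ratio whether the plan was “flip exactly twelve times” or “flip until the third tail”, but the p-value against a fair coin differs between the two plans.

```python
from scipy import stats

k, n = 9, 12                 # 9 heads observed in 12 flips
theta0, theta1 = 0.5, 0.75   # two hypotheses about the probability of heads

# Likelihood ratio: the stopping rule only contributes a constant factor
# (a binomial or negative-binomial coefficient), which cancels out.
def likelihood_ratio(k, n, t1, t0):
    return (t1**k * (1 - t1)**(n - k)) / (t0**k * (1 - t0)**(n - k))

print("Likelihood ratio, either stopping rule:",
      round(likelihood_ratio(k, n, theta1, theta0), 2))      # ~4.8 in favour of 0.75

# The p-value against theta = 0.5, however, depends on the sampling plan:
p_fixed_n = stats.binom.sf(k - 1, n, theta0)                 # "flip exactly 12 times"
p_fixed_tails = stats.nbinom.sf(k - 1, n - k, 1 - theta0)    # "flip until the 3rd tail"
print("p, fixed number of flips:", round(p_fixed_n, 3))      # ~0.073
print("p, fixed number of tails:", round(p_fixed_tails, 3))  # ~0.033
```

For a likelihood or Bayesian analyst the two data sets are evidentially identical; for a Neyman-Pearson analyst, one of them crosses the conventional 0.05 threshold and the other does not.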
Importantly, the approaches sometimes lead to very different conclusions. As shown below, the Neyman-Pearson approach can accept the null hypothesis in cases where the evidence clearly supports the alternative hypothesis.