The journal *Basic and Applied Social Psychology* (*BASP*) has taken a resolute and bold step. A recent editorial announces that it has banned the reporting of inferential statistics. *F*-values, *t*-values, *p*-values, and the like have all been declared personae non gratae. And so have confidence intervals. Bayes factors are not exactly banned but aren't welcomed with open arms either; they are eyed with suspicion, like a mysterious traveler in a tavern.
There is a vigorous debate in the scientific literature and on social media about the pros and cons of Null Hypothesis Significance Testing (NHST), confidence intervals, and Bayesian statistics (making researchers in some frontier towns quite nervous). The editors at *BASP* have seen enough of this debate and have decided to do away with inferential statistics altogether. Sure, you're allowed to submit a manuscript that's loaded with *p*-values and statements about significance or the lack thereof, but they will be rigorously removed, like lice from a schoolchild's head.
The question is whether we can live with what remains. Can we really conduct science without summary statements? Because what does the journal offer in their place? It requires strong descriptive statistics, distributional information, and more power. These are all good things, but we need a way to summarize our results: not just because we can comprehend and interpret them better ourselves and need to communicate them, but also because we need to make decisions based on them as researchers, reviewers, editors, and users. Effect sizes are not banned and so will provide summary information that will be used to answer questions like:

--what will the next experiment be?

--do the findings support the hypothesis?

--has or hasn't the finding been replicated?

--can I cite finding X as support for theory Y?*
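To illustrate the kind of summary statistic that survives the ban, here is a minimal sketch of a standardized effect size (Cohen's *d* with a pooled standard deviation). The function name and sample data are my own invention for illustration, not anything prescribed by *BASP*.

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)  # sample variance (n - 1 denominator)
    var_b = statistics.variance(group_b)
    # Pool the two sample variances, weighted by degrees of freedom
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical descriptives for two conditions
treatment = [5.1, 4.8, 6.0, 5.5, 5.9]
control = [4.2, 4.5, 3.9, 4.8, 4.1]
print(round(cohens_d(treatment, control), 2))
```

A number like this describes the magnitude of a difference without making any inferential claim, which is precisely the position authors in the journal now find themselves in.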

As to that last question, you can hardly cite a result saying *This finding supports or does not support the hypothesis, but here are the descriptives.* The reader will want more in the way of a statistical argument or an intersubjective criterion to decide one way or the other. I have no idea how researchers, reviewers, and editors are going to cope with the new freedoms (from inferential statistics) and constraints (from not being able to use inferential statistics). But that's actually what I like about *BASP*'s ban. It gives rise to a very interesting real-world experiment in meta-science.

*Sneaky Bayes*

There are a lot of unknowns at this point. Can we really live without inferential statistics? Will Bayes sneak in through the half-open door and occupy the premises? Will no one dare to submit to the journal? Will authors balk at having their manuscripts shorn of inferential statistics? Will the interactions among authors, reviewers, and editors yield novel and promising ways of interpreting and communicating scientific results? Will the editors in a few years be

*BASPing* in the glory of their radical decision? And how will we measure the success of the ban on inferential statistics? The wrong way to go about this would be to see whether the policy is adopted by other journals or whether the journal's impact factor rises. So how will we determine whether the ban improves our science?
Questions, questions. But this is why we conduct experiments, and this is why *BASP*'s brave decision should be given the benefit of the doubt.

Footnotes

I thank Samantha Bouwmeester and Anita Eerland for feedback on a previous version and Dermot Lynott for the Strider picture.
