It's always great to see new analytic tools being used to describe everyday activities. Shiny apps are a great way to let readers interact with data and analysis. Recently, the co-editors of Biostatistics, Jeff Leek and Dimitris Rizopoulos, wrote a Shiny app that lets you examine recent review times for manuscripts. This way, you can look at historical "review survival time" data to get an idea of how long your manuscript will take to review. Very cool idea!
This rarely happens (in fact, this specific kind of thing has never happened before), but the American Statistical Association formed a committee and published a statement on p-values.
Basically, p-values have come under attack in recent years, and many scattered discussions have debated aspects of them and of current practice. The ASA decided it would be helpful to centralize and organize these thoughts a little and explain the most common pitfalls, since some folks haven't yet fully understood the issues, while others have over-reacted to them. The ASA boiled its thoughts down to six principles:

1- P-values can indicate how incompatible the data are with a specified statistical model.
2- P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.
3- Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.
4- Proper inference requires full reporting and transparency.
5- A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.
6- By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.

The statement is aimed at anyone involved in research today, and it is worth reading in full. I share the opinion that p-values are like cars: very useful, but you really shouldn't use one without a license.