Discussion about this post

blake harper

In their follow-up paper where they looked at 12 randomly selected recent publications, they found no p-hacking and an 83% replication rate. But obviously that's a very small sample, and they did find some other worrying methodological problems that are less widely discussed.

Arielle Selya PhD

Great article. I think the change to require preregistration (or at least normalize it as the standard but allow justified exceptions) has to come from journals (through submission requirements) and universities (in annual performance evaluations).

Preregistration should reasonably protect against things like p-value hacking, but I realized recently, to my dismay, that it can still leave lots of room for other biases (e.g., confirmation bias toward pet theories) unless the researcher is really diligent about designing the preregistration to test competing theories against each other.

I wrote about an example recently in my field, but to put it in general terms: the preregistration simply says "we expect to find an association between A and B in observational data," and the authors then claim support for their pet causal theory while ignoring competing explanations (including reverse-causal ones). This is almost a worse situation, because authors can claim their preregistered hypotheses were supported even when the results don't meaningfully advance our understanding.

https://arielleselyaphd.substack.com/p/pre-registration-of-research-plans?r=45yctx
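
To make the general point concrete, here is a minimal simulation sketch (with hypothetical variables A and B, not taken from the linked post): two opposite causal structures both produce the preregistered "association between A and B," so confirming that hypothesis cannot adjudicate between the pet theory and the reverse-causal one.

```python
# Minimal sketch: two worlds with opposite causal structure yield the
# same preregistered "association between A and B" in observational data.
# Variables and effect sizes here are hypothetical, for illustration only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 5_000

# World 1: A causes B.
a1 = rng.normal(size=n)
b1 = 0.5 * a1 + rng.normal(size=n)

# World 2: B causes A (reverse causation).
b2 = rng.normal(size=n)
a2 = 0.5 * b2 + rng.normal(size=n)

for label, a, b in [("A -> B", a1, b1), ("B -> A", a2, b2)]:
    r, p = pearsonr(a, b)
    # Both worlds give r around 0.45 with a tiny p-value.
    print(f"{label}: r = {r:.2f}, p = {p:.1e}")
```

In both worlds the preregistered hypothesis comes out "supported," which is exactly why a preregistration needs to specify analyses capable of discriminating between the competing explanations, not just predict an association.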

Edit to add: I still support preregistration, of course - it's a big step in the right direction. We just have to be mindful of its limitations and of what other good practices would help (e.g., adversarial collaborations, which are better suited to testing competing theories).

4 more comments...
