This morning Jason Mitchell self-published an interesting essay espousing his views on why replication attempts are essentially worthless. At first I was merely struck by the fact that what would obviously become a topic of heated debate was self-published, rather than sent through the long slog of a traditional academic medium. Score one for self-publication, I suppose. Jason’s argument is essentially that null results don’t yield anything of value and that we should be improving the way science is conducted and reported rather than publicising our nulls. I found particularly interesting his short list of things he sees as critical to experimental results yet which nevertheless go unreported:
These experimental events, and countless more like them, go unreported in our method section for the simple fact that they are part of the shared, tacit know-how of competent researchers in my field; we also fail to report that the experimenters wore clothes and refrained from smoking throughout the session. Someone without full possession of such know-how—perhaps because he is globally incompetent, or new to science, or even just new to neuroimaging specifically—could well be expected to bungle one or more of these important, yet unstated, experimental details.
While I don’t agree with the overall logic or conclusion of Jason’s argument (I particularly like Chris Said’s Bayesian response), I do think it raises some important, or at least interesting, points for discussion. For example, I agree that there is loads of potentially important stuff that goes on in the lab, particularly with human subjects and large scanners, that isn’t reported. I’m not sure to what extent that stuff can or should be reported, and I think that’s one of the interesting and under-examined topics in the larger debate. I tend to lean towards the stance that we should report just about anything we can – but of course publication pressures and tacit norms mean most of it won’t be published. And probably at least some of it doesn’t need to be? But which things exactly? And how do we go about reporting stuff like how we respond to random participant questions regarding our hypothesis?
To find out, I’d love to see a list of things you can’t or don’t regularly report, using the #methodswedontreport hashtag. Quite a few are starting to show up – most are funny or outright snarky (as seems to be the general mood of the response to Jason’s post), but I think a few are pretty common lab occurrences and are even thought-provoking in terms of their potentially serious experimental side-effects. Surely we don’t want to report all of these ‘tacit’ skills in our burgeoning method sections; the question is which ones need to be reported, and why are they important in the first place?
[…] have tweeted/blogged (EDIT: additional posts from Neuroskeptic, Drugmonkey, Jan Moren, Chris Said, Micah Allen) about a recent essay by Jason Mitchell, Professor of Psychology at Harvard, titled On the […]
I would imagine that many of the most important tacit methods are just that – tacit – and not open to self-report. Researchers do something in a certain way without knowing it. Like riding a bike.
I suspect that a lot of the #methodswedontreport are actually just superstitions. The real magic is unconscious.
[…] effect) has been flying around the psychology blogosphere. Neuroskeptic, Neuropolarbear, Neuroconscience, Drug Monkey, True Brain, and Richard Tomasett have already weighed in with great insight, but I […]
[…] Another Harvard professor has made a poor argument against replication efforts, writes Neuroskeptic. The piece also gave rise to a Twitter hashtag, #MethodsWeDontReport. […]
[…] tacit knowledge without which a replication is likely to fail. There will doubtless always be methods we don’t report. Mitchell uses the example that we don’t report that we instruct participants in neuroimaging […]
[…] get them as clean as you can before you pour in your hypothesis. Perhaps a lot of this falls under methods we don’t report. I’m all for reducing this. Methods sections frequently lack necessary detail. But to some […]