Is Frontiers in Trouble?

Lately it seems the tide is turning against Frontiers. Originally hailed as a revolutionary open-access publishing model, the publishing group has come under intense criticism in recent years. Recent issues include placement on Beall’s controversial ‘predatory publisher’ list, multiple high-profile disputes at the editorial level, and controversy over HIV- and vaccine-denialist articles published in the journal seemingly without peer review. As a proud author of two Frontiers articles and a former frequent reviewer, I recently stopped all publication activities at Frontiers outlets after these issues compounded with a generally poor perception of the journal. Although the official response from Frontiers to these issues has been mixed, yesterday a mass email from a section editor caught my eye:

Dear Review Editors, Dear friends and colleagues,

As some of you may know, Prof. Philippe Schyns recently stepped down from his role as Specialty Chief Editor in Frontiers in Perception Science, and I have been given the honor and responsibility of succeeding him in this function. I wish to extend to him my thanks and appreciation for the hard work he has put into building this journal from the ground up. I will strive to continue his work and maintain Frontiers in Perception Science as one of the primary journals of the field. This task cannot be achieved without the support of a dynamic team of Associate Editors, Review Editors and Reviewers, and I am grateful for all your past, and hopefully future efforts in promoting the journal.

I am aware that many scientists in our community have grown disappointed or even defiant of the Frontiers publishing model in general, and Frontiers in Perception Science is no exception here. Among the foremost concerns are the initial annoyance and ensuing disinterest produced by the automated editor/reviewer invitation system and its spam-like messages, the apparent difficulty in rejecting inappropriate manuscripts, and (perhaps as a corollary) the poor reputation of the journal, a journal to which many authors still hesitate before submitting their work. I have experienced these troubles myself, and it was only after being thoroughly reassured by the Editorial office on most of these counts that I agreed to get involved as Specialty Chief Editor.

Frontiers is revising their system, which will now leave more time for Associate Editors to mandate Review Editors before sending out automated invitations. When they occur, automated RE invitations will be targeted to the most relevant people (based on keyword descriptors), rather than broadcast to the entire board. This means it is very important for each of you to spend a few minutes editing the Expertise keywords on your Loop profile page. Most of these keywords were automatically collected from your publications, and they may not reflect your true area of expertise. Inappropriate expertise keywords are one of the main reasons why you receive inappropriate reviewing invitations!

In the new Frontiers system, article rejection options will be made more visible to the handling Associate Editor. Although my explicit approval is still required for any manuscript rejection, I personally vow to stand behind all Associate Editors who will be compelled to reject poor-quality submissions. (While perceived impact cannot be used as a rejection criterion, poor research or writing quality and objective errors in design, analysis, or interpretation can and should be used as valid causes for rejection.) I hope that these measures will help limit the demands on reviewers’ time, and contribute to advancing the standards and reputation of Frontiers in Perception Science. Each of you can also play a part in this effort by continuing to review articles that fall into your area of expertise, and by submitting your own work to the journal.

I look forward to working with all of you towards establishing Frontiers in Perception Science as a high-standard journal for our community.

It seems Frontiers is indeed aware of the problems and is hoping to win back wary reviewers and authors. But is it too little, too late? Discussing the problems at Frontiers is often met with severe criticism or outright dismissal by proponents of the OA publishing system, but I felt these responses neglected a wider negative perception of the publisher that has steadily grown over the past five years. To get a better handle on this I asked my Twitter followers what they thought. 152 people responded as follows:

As some of you requested control questions, here are a few for comparison:


That is a stark difference between the two top open-access journals – whereas only 19% said there was no problem at Frontiers, a full 50% said there was no problem at PLOS ONE. Even accounting for general science skepticism, opinions of Frontiers are particularly negative.

Sam Schwarzkopf also lent some additional data, comparing the whole field of major open access outlets – Frontiers again comes out poorly, although strangely so does F1000:

These data confirm what I had already feared: public perception among scientists (insofar as we can infer anything from such a poll) is lukewarm at best. Frontiers has a serious perception problem. Only 19% of 121 respondents were willing to say outright that there was no problem at the journal. A full 45% said there was a serious problem, and 36% were unsure. Of course, to fully evaluate these numbers we’d like to know the base rate of similar responses for other journals, but I cannot imagine any Frontiers author, reviewer, or editor feeling joy at these numbers – I certainly do not. Furthermore, they reflect a widespread negativity I hear frequently from colleagues across the UK and Denmark.

What underlies this negative perception? As many proponents point out, Frontiers has actually been quite diligent at responding to user complaints: controversial papers have been put under review immediately, overly spammy review invitations and special-issue invites have largely ceased, and so on. I would argue the issue is not any one single mistake on the part of Frontiers leadership, but a growing history of errors contributing to a perception that the journal follows a profit-led ‘publish anything’ model. At times the journal feels almost totally automated, with little human care given to publishing and extremely high fees. What are some of the specific complaints I regularly hear from colleagues?

  • Spammy special-issue invites. An older issue, but at Frontiers’ inception many authors were inundated with constant invitations to special issues, many of which were only tangentially related to the authors’ specialties.
  • Spammy review invites. Colleagues who signed on to be ‘Review Editors’ (basically repeat reviewers) reported receiving as many as ten review requests in a month, again many without relevance to their interests.
  • Related to both of the above, a perception that special issues and articles are frequently reviewed by close colleagues with little oversight. Similarly, many special issues were edited by junior researchers at the PhD level.
  • Endless review. I’ve heard numerous complaints that even fundamentally flawed or unpublishable papers are difficult or impossible to reject. Reviewers report going through multiple rounds of charitable review, finding the paper only gets worse and worse, only to be removed from the review by editors and to see the paper published without them.

Again, Frontiers has responded to each of these issues in various ways. For example, Frontiers originally defended the special issues, saying they were intended to give junior researchers an outlet to publish their ideas. Fair enough, and the spam issues have largely ceased. Still, I would argue it is the build-up and repetition of these issues that has made authors and readers wary of the journal. Coupled with the high fees and the feeling of automation, this leads to a perception that the outlet is mostly junk. That is a shame, as there are certainly many high-value articles in Frontiers outlets. Nevertheless, academics are extremely wary of reputational risk, and negative press creates a vicious feedback loop. If researchers feel Frontiers is a low-quality, spam-generating publisher that relies on overly automated processes, they are unlikely to submit their best work or to review there. The quality of both drops, and the cycle intensifies.

For my part, I don’t intend to return to Frontiers unless they begin publishing their peer reviews. I think this would go a long way towards stemming many of these issues, and would encourage readers to judge individual articles on their own merits.

What do you think? What can be done to stem the tide? Please add your own thoughts, and stories of positive or negative experiences at Frontiers, in the comments.

____

Edit:

A final comparison question


Some thoughts on writing ‘Bayes Glaze’ theoretical papers.

[This was a Twitter navel-gazing thread someone ‘unrolled’. I was really surprised that it read basically like a blog post, so I thought: why not post it here directly? I’ve made a few edits for readability. Consider this an experiment in micro-blogging.]

In the past few years, I’ve started and stopped a paper on metacognition, self-inference, and expected precision about a dozen times. I just feel conflicted about the nature of these papers and want to make a very circumspect argument without too much hype. As many of you frequently note, we have far too many ‘Bayes glaze’ review papers in glam mags making claims with no clear relationship to data or actual computational mechanisms.

It has gotten so bad that I sometimes see papers or talks where it feels like the authors took totally unrelated concepts and plastered “prediction” or “prediction error” in random places. This is unfortunate, and it’s largely driven by the fact that these shallow reviews generate a bonkers number of citations. It is a land rush: publish the same story over and over again, just changing the topic labels, planting a flag in an area and then publishing some quasi-related empirical stuff. I know people are excited about predictive processing, and I totally share that excitement. There is really excellent theoretical work being done, and I guess flag-planting is not totally indefensible in some cases for early-career researchers. But there is also a lot of cynical stuff, and I worry that it speaks so much more loudly than the good, careful work. The danger is that we’ll cause a blowback and ultimately be seen as ‘cargo cult computationalists’, which will drag all of our research down, good and otherwise.

In the past, my theoretical papers in this area have been super dense and frankly a bit confusing in some aspects. I just wanted to really, really do due diligence and not overstate my case. But I do have some very specific theoretical proposals that I think are unique. I’m not sure why I’m sharing all this, but I think it is always useful to remind people that we feel imposter syndrome and conflict at all career levels. And I want to try and be more transparent in my own thinking – I feel that the earlier I get feedback, the better. These papers have been living in my head like demons, simultaneously too ashamed to be written and jealous of everyone else getting on with their sexy high-impact review papers.

Specifically, I have some fairly straightforward ideas about how interoception and neural gain (precision) inter-relate, and I also have a model I’ve been working on for years about how metacognition relates to expected precision. If you’ve seen any of my recent talks, you get the gist of these ideas.

Now, I’m *really* going to force myself to finally write these. I don’t really care where they are published; it doesn’t need to be a glamour review journal (as many have suggested I should aim for), although at my career stage I guess that is the thing to do. I will probably preprint them on my blog, or at least muse openly about them here, although I’m not sure this is a great idea for theoretical work.

Further, I will try and hold to three key promises:

  1. Keep it simple. One key hypothesis/proposal per paper. Nothing grandiose.
  2. Make specific, falsifiable predictions about behavioral and neurophysiological phenomena, with no (minimal?) hand-waving.
  3. Consider alternative models/views – it really gets my goat when someone slaps ‘prediction error’ on their otherwise straightforward story and then acts like it’s the only game in town. ‘Predictive processing’ tells you almost *nothing* about specific computational architectures, neurobiological mechanisms, or general process theories. I’ve said this until I’m blue in the face: there can be many, many competing models of any phenomenon, all of which utilize prediction errors.

These papers *won’t* be explicitly computational – although we have that work in preparation as well – but will just try to make a single key point that I want to build on. If I achieve my other three aims, it should be reasonably straightforward to build computational models from these papers.

That is the idea. Now I need to go lock myself in a cabin in the woods for a few weeks and finally get these papers off my plate. Otherwise these Bayesian demons are just gonna keep screaming.

So, where to submit? Don’t say Frontiers…

Bon Voyage – Neuroconscience goes to Cambridge! A Retrospective and Thank You.

Today is a big day – I’m moving to Cambridge! After nearly five years of living in London, it’s finally time for me to move on to greener pastures. It’s hard to believe, really. I first came to London nearly fifteen years ago on a high-school trip. It was love at first sight. I knew that somehow, someday, I would live here. As a Bermudian immigrant living in Florida, to me London was the centre of the world: the bustling big city, but unlike many in the US, one rich in a thousand years of history and culture. And although I didn’t know it then, my eventual career in neuroscience would draw me towards this great city like a moth to the flame.

I still remember it like it was yesterday: my first internship in a neuroimaging lab at the University of Central Florida. Graduate students poring over some strange software called “SPM” to produce colorful brain maps, accompanied by an arcane tome of the same name. Although I knew even then that I wanted to do cognitive neuroimaging, SPM and the Functional Imaging Laboratory (FIL) that produces it were just names on a book at the time. Later, however, when I joined the Interacting Minds Group at Aarhus University, SPM and the FIL were everywhere. Most of our PIs had undertaken their formative training at the centre; we even had our own Friday project presentations modelled exactly on the original. Every single desk had a copy of the SPM textbook. To me London, the FIL, and the brilliant people working there represented an unchallenged mecca of neuroimaging methods. I set my sights on a postdoc in Queen Square.

Yet even halfway through my PhD, I wasn’t sure how or even if I’d manage to realize my dream. During my PhD I visited the Institute of Cognitive Neuroscience (ICN) and applied to several labs, unsuccessfully. All the excitement and energy of the London neuroscience scene seemed only to tease me; as if to say an upstart boy from Alabama via Bermuda could only taste what was on offer, but never possess it. Finally, something broke: Geraint Rees, then ICN director, took an interest in my work on embodiment and perception, and invited me to apply for a jaw-dropping four-year postdoc based between the FIL and ICN. I was elated to apply, and even more so to get it. Breathlessly I told my friends and family back home – this was it, my big break in the big city. And I wasn’t just headed to London, but to that seemingly mythic centre from which so much of my interests – brain imaging, predictive processing, and clever experimental design – seemed to stem. As far as I was concerned, this was my big chance on Broadway, and I told anyone who would listen.

Of course, in the end reality is never quite the same as expectation. I’ll never forget the shock of my first day on the job. During my induction, I was giddy as Marcia Bennett led me around the centre, explaining various protocols and introducing me to other fellows. But then she took me down to the basement office: a sterile, open-plan room with more than twenty desks, no real windows, and ten or more active researchers all working in close proximity. As she left me at my desk, I worried that perhaps I’d made a mistake. My offices in Denmark had been luxurious, often empty, and with a stunning view of the city. Now I was to sit in a basement, crammed beside what seemed like a factory line of postdocs, every day for four years? On top of that, I quickly learned that commuting an hour every day on the tube is far from fun once the novelty wears off. Nevertheless, I set to my work. And in time, as I adjusted to living in a big city and working in a big centre, I came to find that London, UCL, and Queen Square would capture my mind and my heart in ways I could never have imagined.

Now as I look back, it’s difficult to believe so much has come to pass. I’ve lived in my dream city for the past five years; I’ve gone to bohemian art shows, sipped with brewers at hipster beer festivals, visited ornate science societies, and grown a massive beard. I’ve ended up at crazy nightclubs in the early hours; I’ve witnessed major developments in cognitive neuroscience; and I’ve started a neuroscience sport-climbing club. Like an extremophile feeding off some nutrient-rich deep-sea volcanic vent, I’ve inhaled every bit of the exciting new methods, ideas, and energy that flow through the amazing and overwhelming world that is London Neuroscience. I’ve made friends, and I’ve made mistakes. I’ve had major victories, and also major losses. And through it all, I’ve been supported by some of the most brilliant and helpful friends, colleagues, and mentors anyone could possibly ask for.

Of course, London is many things to many people, but perhaps it is home to the fewest of all. So many times I asked myself: am I a Londoner? Will this city ever be my home, or is it just one great side-show on the great journey of life? In the end, I still don’t know the answer. I’m a nomad, and I’ve lived in London for as long as I’ve lived anywhere else in the world. It will always be a part of me, and I will always be grateful for the gifts she has given me. I leave London a better person, and a better neuroscientist. And who knows… maybe someday I’ll return (although hopefully at a higher pay grade!).

Where do you go after living in your dream city and working at your dream job? To Cambridge! I’m on my way today to a new flat, a new job, a new mentor, and a new centre. If my mother could see me now, I’m sure she’d never believe her baby boy made it so far from sweet home Alabama. And London won’t be far away; indeed, for some months I’ll be returning each week to carry on collaborations as I transition between jobs. And as Karl says: once you’ve been at the FIL, you never really leave. The FIL is many things, but most of all it is a family of people bonded by a desire to do really kick-ass cognitive neuroscience. So as I go on to Cambridge and whatever lies beyond, I know I will always carry with me my training, my friends, and my amazing colleagues.

And the future is bright indeed! This post is already too long, but let it suffice as a teaser when I say that my upcoming projects will be some of the most exciting work I have ever done. I’m leaving the FIL armed with a shiny new model of my own making and a suite of experimental questions that I am eager to answer. Together with Professor Paul Fletcher and the amazing colleagues of the new MRC/Wellcome Translational Research Facility and the Cambridge Psychiatry Department, I will have access to unique patient groups not found anywhere else in the world, the latest methods in pharmacological neuroimaging, and a team of amazing collaborators accelerating our research. By applying the methods and models I’ve developed in London in this setting, I will have the chance to push our understanding of interoception, metacognition, and embodied self-inference to new heights. If London was my Broadway, then Cambridge shall be the launchpad for my world tour.

So stay tuned! If I learned one thing in the past half-decade, it’s that I need to return to blogging. Now that I’ve finally recaptured my voice, and am headed forward on another amazing research adventure, it’s more important than ever to communicate to you, my beloved hive mind, the exciting new developments on the horizon. In the near future I will outline some of the new directions my research will take at Cambridge, where we will use a variety of methods to probe brain-body interaction and its disruption in psychiatric and health-harming behaviours.

And now for some much-needed thanks. Thanks to my beautiful and brilliant wife Francesca – whose incredible research inspires me daily – for her unwavering and unconditional love and support in all things. To my Grandmother, who saved me so many times and is the inspiration for so much of my work. Thanks Mom – I wish you could see this. Thanks Dad for teaching me to work hard for my dreams and to always lead the way. Thanks to Corey and Marissa, who are the best brother and sister anyone could ask for. To Jonathan and Alex, the two best men on the planet – without our Skype sessions I’d have never survived this long! To my amazing mentors, of whom I have been fortunate to have so many who have helped me so dearly: Shaun Gallagher, Andreas Roepstorff, Antoine Lutz, Uta & Chris Frith, Geraint Rees, Jonathan Smallwood, and Karl Friston. To my students Maria, Darya, Calum, and Thomas, who did such an amazing job making our science a reality. To my awesome, amazing friends, colleagues, and collaborators who challenge me, help me, and help make every piece of science that comes across our desks as brilliant, rigorous, and innovative as possible – thanks especially to Tobias Hauser, Peter Zeidman, Sam Schwarzkopf, and Francesco Rigoli for letting me distract you with endless questions – you guys make my research rock in so many ways. Thanks to John Greenwood, Joel Winston, Fred Dick, Martina Callaghan, John Ashburner, Gareth Barnes, Steve Fleming, Ray Dolan, Tristan Bekinschtein, Becky Lawson, Rimona Weil, and all of my amazing collaborators – here’s to many future projects together! To the entire basement and first-floor clan who put up with my outbursts and antics: you guys are the best, and you made this time amazing. Thanks to Brianna, Sofia, Antonio, Eva, Elizabeth, Dan, Frederike, Phillip, Alex, Wen Wen, and everyone from the ICN who made Queen Square the coolest place in town, and who showed me how fun castles can be. To the amazing administrative and scientific staff of the FIL, who truly make it the best place to work in neuroscience, bar none. And finally – thanks to YOU, my readers and digital colleagues, for your support, your energy, and your enthusiasm, which make @Neuroconscience possible and help me push myself into ever bolder frontiers.

To Cambridge, and beyond!

Micah Galen Allen/@neuroconscience

Predictive coding and how the dynamical Bayesian brain achieves specialization and integration

Author’s note: this marks the first in a new series of journal-entry-style posts in which I write freely about things I like to think about. The style is meant to be informal and off the cuff, building towards a sort of Socratic dialogue. Please feel free to argue or debate any point you like. These are meant to serve as exercises in writing and thinking, to improve the quality of both and to lay groundwork for future papers.

My wife Francesca and I are spending the winter holidays vacationing in the north Italian countryside with her family. Today our free-time discussions turned to how predictive coding and generative models can accomplish the multimodal perception that characterizes the brain. Francesca asked a question we found particularly thought-provoking: if the brain at all levels is only communicating forward what is not predicted (prediction error), how can you explain the functional specialization that characterizes the different senses? For example, if each sensory hierarchy is only communicating prediction errors, what explains their unique specialization in terms of, e.g., the frequency, intensity, or quality of sensory inputs? Put another way, how can the different sensations be represented, if the entire brain is only communicating in one format?

We found this quite interesting, as it seems straightforward and yet the answer lies at the very basis of predictive coding schemes. To arrive at an answer we first had to lay a little groundwork in terms of information theory and basic neurobiology. What follows is a grossly oversimplified account of the basic neurobiology of perception, which serves only as a kind of philosopher’s toy example to consider the question. Please feel free to correct any gross misunderstandings.

To begin, it is clear, at least according to Shannon’s theory of information, that any sensory property can be encoded in a simple system of ones and zeros (or nerve impulses). Frequency, time, intensity, and so on can all be re-described in terms of such a simplistic encoding scheme; if this were not the case, modern television wouldn’t work. Second, each sensory hierarchy presumably begins with a sensory receptor, which directly transduces physical fluctuations into a neuronal code. For example, in the auditory hierarchy the cochlea contains small hair cells that vibrate only to a particular frequency of sound wave. This vibration, through a complex neuro-mechanical relay, results in a tonotopic depolarization of first-order neurons in the spiral ganglion.

The human cochlea, a fascinating neuro-mechanical apparatus that directly transduces air vibrations into neural representations.

It is here at the first-order neuron that the hierarchy presumably begins, and also where functional specialization becomes possible. It seems to us that predictive coding should say that the first neuron is simply predicting a particular pattern of inputs, which corresponds directly to an expected external physical property. To give a toy example, say we present the brain with a series of tones that reliably increase in frequency at 1 Hz intervals. At the lowest level, the neuron will fire at a constant rate if the frequency at interval n is 1 Hz greater than at the previous interval, and will fire more or less if the frequency is greater or less than this basic expectation, creating a positive or negative prediction error (remember that the neuron should only alter its firing pattern if something unexpected happens). Since frequency here is signaled directly by the mechanical vibration of the cochlear hairs, the first-order neuron is simply predicting which frequency will be signaled. More realistically, each sensory neuron is probably only predicting whether or not a particular frequency will be signaled – we know from neurobiology that low-level neurons are basically tuned to a particular sensory feature, whereas higher-level neurons encode receptive fields across multiple neurons or features. All this is to say that the first-order neuron is specialized for frequency because all it can predict is frequency; its only afferent input is the direct result of sensory transduction. The point here is that specialization in each sensory system arises in virtue of the fact that the inputs correspond directly to a physical property.

Presumably, first-order neurons predict the presence or absence of a particular, specialized sensory feature owing to their input. Credit: Wikipedia.
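
To make the toy example above concrete, here is a minimal numerical sketch (my own illustration, not a model from the post or the literature; all numbers are arbitrary): a simulated unit that expects each tone to be 1 Hz above the last, and modulates its firing only when that expectation is violated.

```python
import numpy as np

# Toy first-order "neuron": expects each tone to be 1 Hz above the last,
# and changes its firing rate only when that expectation is violated.
tones = np.array([100.0, 101.0, 102.0, 105.0, 106.0, 104.0])  # presented frequencies (Hz)

baseline_rate = 10.0  # spikes/s when the input is fully predicted
gain = 2.0            # how strongly prediction error drives firing

for prev, current in zip(tones[:-1], tones[1:]):
    predicted = prev + 1.0               # the unit's expectation
    error = current - predicted          # signed prediction error
    rate = baseline_rate + gain * error  # firing deviates only when surprised
    print(f"heard {current:5.1f} Hz, expected {predicted:5.1f} Hz, "
          f"error {error:+4.1f}, rate {rate:5.1f} spikes/s")
```

Note that the only quantity this unit can ever be ‘about’ is frequency, because frequency is the only information its input carries – which is exactly the sense in which specialization falls out of the input statistics.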

Now, as one ascends higher in the hierarchy, each subsequent level predicts the activity of the previous one. The first-order neuron predicts whether a given frequency is presented, the second perhaps predicts whether a receptive field is activated across several similarly tuned neurons, the third predicts a particular temporal pattern across multiple receptive fields, and so on. Each subsequent level encodes a “hyperprior” over a higher-order feature of the previous level. Eventually we get to a level where the prediction is no longer bound to a single sensory domain, but instead has to do with complex, non-linear interactions between multiple features. A parietal neuron might thus predict that an object in the world is a bird if it sings at a particular frequency and has a particular bodily shape.

The motif of hierarchical message passing that encompasses the nervous system, according to the Free Energy principle.
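
Here is an equally toy sketch of that stacking (again my own construction, in the spirit of classic linear predictive coding models, not anything from this post): each level holds beliefs about the level below, only the residual errors ascend, and beliefs are adjusted by gradient descent until those errors are minimized.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy three-level stack: each level predicts the activity of the level
# below through a fixed linear mapping (random placeholders, not learned),
# and only the residual prediction errors ascend the hierarchy.
sizes = [8, 4, 2]                                  # units per level, bottom to top
W = [rng.standard_normal((sizes[i], sizes[i + 1])) for i in range(2)]

x = rng.standard_normal(sizes[0])                  # sensory input at the bottom
mu = [np.zeros(s) for s in sizes[1:]]              # beliefs at the higher levels
lr = 0.02                                          # gradient-descent step size

for _ in range(500):
    e0 = x - W[0] @ mu[0]      # what level 1 fails to explain about the input
    e1 = mu[0] - W[1] @ mu[1]  # what level 2 fails to explain about level 1
    # descend the summed squared error: errors go up, predictions come down
    mu[0] += lr * (W[0].T @ e0 - e1)
    mu[1] += lr * (W[1].T @ e1)

print("residual error at the bottom:", round(float(np.linalg.norm(e0)), 3))
```

Every level runs the same canonical computation; what differs is only what its inputs happen to be about.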

If this general scheme is correct, then according to hierarchical predictive coding, functional specialization primarily arises in virtue of the fact that, at the lowest level, each hierarchy receives inputs that strictly correspond to a particular feature. The cochlea picks up fluctuations in air vibration (sound), the retina fluctuations in light frequency (vision), and the skin changes in thermal amplitude and tactile frequency (touch). The specialization of each system is due to the fact that each is attempting to predict higher and higher-order properties of those low-level inputs, which are by definition particular to a given sensory domain. Any further specialization in the hierarchy must then arise from the fact that higher levels of the brain predict inputs from multiple sensory systems – we might find multimodal object-related areas simply because the best hyperprior governing nonlinear relationships between frequency and shape is an amodal or cross-modal object. The actual etiology of higher-level modules is a bit more complicated than this, and requires an appeal to evolution to explain in detail, but we felt this was a generally sufficient explanation of specialization.

Nonlinearity of the world and perception: prediction as integration

At this point, we felt like we had some insight into how predictive coding can explain functional specialization without needing to appeal to special classes of cortical neurons for each sensation. Beyond the sensory receptors, the function of each system can be realized simply by means of a canonical, hierarchical prediction of each layered input, right down to the neurons that predict which frequency will be signaled. However, something was still missing, prompting Francesca to ask: how can this scheme explain the coherent, multi-modal, integrated perception that characterizes conscious experience?

Indeed, we certainly do not experience perception as a series of nested predictions. All of the aforementioned machinery functions seamlessly beyond the point of awareness. In phenomenology, one way to describe such influences is as prenoetic (before knowing; see also prereflective), i.e. things that influence conscious experience without themselves appearing in experience. How then can predictive coding explain the transition from segregated, feature-specific predictions to the unified percept we experience?

When we arrange sensory hierarchies laterally, we see the “Markov blanket” structure of the brain emerge. Each level predicts the control parameters of subsequent levels. In this way integration arises naturally from the predictive brain.

As you might guess, we have already hinted at part of the answer. Imagine that instead of picturing each sensory hierarchy as an isolated pyramid, we arrange them such that each level is parallel to its equivalent in the ‘neighboring’ hierarchy. On this view, we can see that relatively early in each hierarchy you arrive at multi-sensory neurons that predict conjoint expectations over multiple sensory inputs. Conveniently, this observation matches what we actually know about the brain: audition, touch, and vision all converge in temporo-parietal association areas.

Perceptual integration is thus achieved as easily as specialization; it arises from the fact that each level predicts a hyperprior on the previous level. As one moves up the hierarchy, each level predicts more integrated, abstract, amodal entities. Association areas don’t just predict that a certain sight or sound will appear; they encode a joint expectation across both (or all) modalities. Just as the fusiform face area predicts complex, nonlinear conjunctions of lower-level visual features, multimodal areas predict nonlinear interactions between the senses.

Half a cat and half a post, or a cat behind a post? The deep convolutional nature of the brain helps us solve this and similar nonlinear problems.

It is this nonlinearity that makes predictive schemes so powerful and attractive. To understand why, consider the task the brain must solve to be useful. Sensory impressions are not generated by simple linear inputs; for perception to be useful to an organism, it must process the world at a level that is relevant for that organism. This is the world of objects, persons, and things, not of disjointed, individual sensory properties. When I watch a cat walk behind a fence, I don’t perceive two halves of a cat and a fence post, but rather a cat hidden behind a fence. These kinds of nonlinear interactions between objects and properties of the world are ubiquitous in perception; the brain must solve not for the immediately available sensory inputs but for the complex hidden causes underlying them. This is achieved in a manner similar to a deep convolutional network: each level performs the same canonical prediction, yet together the hierarchy extracts the hidden features that best explain the complex interactions producing physical sensations. In this way the predictive brain somersaults over the binding problem of perception: perception is integrated precisely because conjoint hypotheses are better, more useful explanations than discrete ones. As long as the network has sufficient hierarchical depth, it will always arrive at these complex representations. It’s worth noting that we can observe the flip side of this process in common visual illusions, where a higher-order percept or prior “fills in” our actual sensory experience (e.g. when we perceive a convex circle as being lit from above).

Our higher-level, integrative priors “fill in” our perception.

Beating the homunculus: the dynamic, enactive Bayesian brain

Feeling satisfied with this, Francesca and I concluded our fun holiday discussion by considering some common misunderstandings this scheme might invite. For example, the notion of hierarchical prediction explored above might lead one to expect that there has to be a “top” level, a kind of super-homunculus sitting in the prefrontal cortex, predicting the entire sensorium. This would be an impossible solution; how could any subsystem of the brain possibly predict the entire activity of the rest? And wouldn’t that level itself need to be predicted, to be realised in perception, leading to infinite regress? Luckily, the intuition that these myriad hypotheses must “come together” fundamentally misunderstands the Bayesian brain.

Remember that each level is only predicting the activity of the level before it. The integrative parietal neuron is not predicting the exact sensory input at the retina; rather, it is only predicting what pattern of inputs it should receive if the sensory input is an apple, or a bat, or whatever. The entire scheme is linked up this way; the individual units are just stupid predictors of their immediate input. It is only when you link them all together in a deep network that the brain can recapitulate the complex web of causal interactions that make up the world.

This point cannot be stressed enough: predictive coding is not a localizationist enterprise. Perception does not come about because a magical brain area inverts an entire world model. It comes about in virtue of the distributed, dynamic activity of the entire brain as it constantly attempts to minimize prediction error across all levels. Ultimately the “model” is not contained anywhere in the brain; the entire brain itself, with its full network of connection weights, is the model of the world. The power to predict complex nonlinear sensory causes arises because the best overall pattern of interactions will be the one that most accurately (or usefully) explains sensory inputs and the complex web of interactions causing them. You might rephrase the famous saying as “the brain is its own best model of the world”.

As a final consideration, it is worth noting that some misconceptions may arise from the way we ourselves perform Bayesian statistics. As an experimenter, I formalize a discrete hypothesis (or set of hypotheses) about something and then invert that model to explain data in a single step. In the brain, however, the “inversion” is just the constant interplay of input and feedback across the nervous system at all levels. In fact, under this distributed view (at least according to the Free Energy Principle), neural computation is deeply embodied, as actions themselves complete the inferential flow to minimize error. Just like neural feedback, actions function as ‘predictions’, generated by the inferential mechanism to render the world more sensible to our expectations. This ultimately minimises prediction error just as internal model updates do, albeit with a different ‘direction of fit’ (world to model, instead of model to world). In this way the ‘model’ is distributed across the brain and body; actions are as much a part of the computation as the brain itself and constitute a form of “active inference”. In fact, if one extends one’s view to evolution, the morphological shape of the organism is itself a kind of prior, predicting the kinds of sensations, environments, and actions the agent is likely to inhabit. This intriguing idea will be the subject of a future blog post.
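
In equation form, this picture is usually written as a gradient descent on free energy. The following is only a schematic rendering, simplified (scalar precisions, Gaussian assumptions) from the treatment in the Friston papers listed under Further Reading:

```latex
% Free energy as a sum of precision-weighted prediction errors across levels i:
F \;\approx\; \tfrac{1}{2}\sum_i \pi_i \, \varepsilon_i^{2},
\qquad
\varepsilon_i \;=\; \mu_{i-1} - g_i(\mu_i)

% Perception and action descend the same gradient, in opposite directions of fit:
\dot{\mu} \;=\; -\,\frac{\partial F}{\partial \mu}
\quad\text{(update the model to fit the world)},
\qquad
\dot{a} \;=\; -\,\frac{\partial F}{\partial a}
\quad\text{(act on the world to fit the model)}
```

Here the epsilons are the prediction errors at each level, the pi terms their precisions (inverse variances), mu the internal states, g the generative mappings that produce predictions, and a the actions through which the organism changes its own sensory input.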

Conclusion

We feel this is an extremely exciting view of the brain. The idea that an organism can achieve complex intelligence simply by embedding a simple repetitive motif within a dynamical body seems to us to be a fundamentally novel approach to the mind. In future posts and papers, we hope to further explore the notions introduced here, considering questions about “where” these embodied priors come from and what they mean for the brain, as well as the role of precision in integration.

Questions? Comments? Feel like I’m an idiot? Sound off in the comments!

Further Reading:

Brown, H., Adams, R. A., Parees, I., Edwards, M., & Friston, K. (2013). Active inference, sensory attenuation and illusions. Cognitive Processing, 14(4), 411–427. http://doi.org/10.1007/s10339-013-0571-3
Feldman, H., & Friston, K. J. (2010). Attention, Uncertainty, and Free-Energy. Frontiers in Human Neuroscience, 4. http://doi.org/10.3389/fnhum.2010.00215
Friston, K., Adams, R. A., Perrinet, L., & Breakspear, M. (2012). Perceptions as Hypotheses: Saccades as Experiments. Frontiers in Psychology, 3. http://doi.org/10.3389/fpsyg.2012.00151
Friston, K., & Kiebel, S. (2009). Predictive coding under the free-energy principle. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 364(1521), 1211–1221. http://doi.org/10.1098/rstb.2008.0300
Friston, K., Thornton, C., & Clark, A. (2012). Free-Energy Minimization and the Dark-Room Problem. Frontiers in Psychology, 3. http://doi.org/10.3389/fpsyg.2012.00130
Moran, R. J., Campo, P., Symmonds, M., Stephan, K. E., Dolan, R. J., & Friston, K. J. (2013). Free Energy, Precision and Learning: The Role of Cholinergic Neuromodulation. The Journal of Neuroscience, 33(19), 8227–8236. http://doi.org/10.1523/JNEUROSCI.4255-12.2013


Monitoring the mind: clues for a link between metacognition and self-generated thought

Jonny Smallwood, one of my PhD mentors, just posted an interesting overview of some of his recent work on mind-wandering and metacognition (including our Frontiers paper). Check it out!

The Mind Wanders

It is a relatively common experience to lose track of what one is doing: We may stop following what someone is saying during conversation, enter a room and realise we have forgotten why we came in, or lose the thread of our own thoughts, leaving us with a sense that we had reached a moment of insight that is now lost forever. One important influence on making sure that we can stay on target to achieve our goals is the capacity for meta-cognition, or the ability to accurately assess our own cognitive experience. Meta-cognition is important because it allows us the opportunity to correct for errors if and when they occur. I have recently become interested in this capacity for accurately assessing the contents of thought and, along with two different groups of collaborators, have begun to explore its neural basis.

We were interested in whether meta-cognition is a…


Mind-wandering and metacognition: variation between internal and external thought predicts improved error awareness

Yesterday I published my first paper on mind-wandering and metacognition, with Jonny Smallwood, Antoine Lutz, and collaborators. This was a fun project for me, as I spent much of my PhD exhaustively reading the literature on mind-wandering and default mode activity, resulting in a lot of intense debate at my research center. When we had Jonny over as an opponent at my PhD defense, the chance to collaborate was simply too good to pass up. Mind-wandering is super interesting precisely because we do it so often. One of my favourite anecdotes comes from around the time I was arguing heavily for the role of the default mode in spontaneous cognition to some very skeptical colleagues. The next day, while waiting to cross the street, one such colleague rode up next to me on his bicycle and joked, “are you thinking about the default mode?” And indeed I was – meta-mind-wandering!

One thing that has really bothered me about much of the mind-wandering literature is how frequently it is presented as attention = good, mind-wandering = bad. Can you imagine how unpleasant it would be if we never mind-wandered? Just picture trying to solve a difficult task while being totally, 100% focused. This kind of hyper-locked attention can easily become pathological, preventing us from altering course when our behaviour goes awry or when something internal needs to be adjusted. Mind-wandering serves many positive purposes, from stimulating our imaginations to motivating us in boring situations with internal rewards (boring task… “ahhhh, remember that nice mojito you had on the beach last year?”). Yet we largely see papers exploring the costs – mood deficits, cognitive control failure, and so on. In the meditation literature this has even been taken up to form the misguided idea that meditation should reduce or eliminate mind-wandering (even though there is almost zero evidence to that effect…).

Sometimes our theories end up reflecting our methodological apparatus, to the extent that they may not fully capture reality. I think this is part of what has happened with mind-wandering, which was originally defined in relation to difficult (and boring) attention tasks. Worse, mind-wandering is usually operationalized as a dichotomous state (“off-task” vs “on-task”), when a little introspection strongly suggests it is much more of a fuzzy, dynamic transition between meta-cognitive and sensory processes. By studying mind-wandering as just the mean number of times you were “off-task”, we take the stream of consciousness and act as if the depth at one point in the river is the entire story – but what about flow rate, tidal patterns, fishies, and all the dynamic variability that defines the river? My idea was that one simple way to get at this is to look at the within-subject variability of mind-wandering, rather than just the overall mean rate. In this way we could get some idea of the extent to which a person’s mind-wandering fluctuates over time, rather than categorising these events dichotomously.

The EAT task used in my study, with thought probes.

To do this, we combined a classical metacognitive response-inhibition paradigm, the “error awareness task” (pictured above), with standard interleaved “thought probes” asking participants to rate on a scale of 1-7 the subjective frequency of task-unrelated thoughts in the task interval prior to the probe. We then examined the relationship between the ability to perform the task, or “stop accuracy”, and each participant’s mean task-unrelated thought (TUT) rating. Here we expected to replicate the well-established relationship between TUTs and attention decrements (after all, it’s difficult to inhibit your behaviour if you are thinking about the hunky babe you saw at the beach last year!). We further examined whether the standard deviation of TUT within each participant (TUT variability) would predict error monitoring, reflecting a relationship between metacognition and increased fluctuation between internal and external cognition (after all, isn’t that kind of the point of metacognition?). For specificity and completeness, we conducted each multiple regression analysis with the contra-variable as a control predictor. Here is the key finding from the paper:

Regression analysis of TUT, TUT variability, stop accuracy, and error awareness.

As you can see in the bottom right, we clearly replicated the relationship of increased overall TUT predicting poorer stop performance. Individuals who report an overall high intensity/frequency of mind-wandering unsurprisingly commit more errors. What was really interesting, however, was that the more variable a participant’s mind-wandering, the greater their error-monitoring capacity (top left). This suggests that individuals who show more fluctuation between internally and externally oriented attention may be better able to enjoy the benefits of mind-wandering while limiting its costs. Of course, these are only individual differences (i.e. correlations) and should be treated as highly preliminary. It is possible, for example, that participants who use more of the TUT scale have higher metacognitive ability in general, rather than the two variables being causally linked in the way we suggest. We are careful to raise these and other limitations in the paper, but I do think this finding is a nice first step.
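
For readers who want the shape of this analysis, here is a minimal sketch (column and file names are invented for illustration; this is not the actual analysis code from the paper, which also handled additional covariates):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical per-participant summary table: mean and SD of the 1-7
# task-unrelated thought (TUT) probe ratings, plus the two task measures.
df = pd.read_csv("participant_summaries.csv")

# Both predictors enter together, so each serves as the control
# ("contra-variable") for the other, mirroring the logic described above.
X = sm.add_constant(df[["tut_mean", "tut_sd"]])

# Model 1: stop accuracy, expected to track mean TUT (negatively).
print(sm.OLS(df["stop_accuracy"], X).fit().summary())

# Model 2: error awareness, expected to track TUT variability (positively).
print(sm.OLS(df["error_awareness"], X).fit().summary())
```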

To ‘probe’ a bit further we looked at the BOLD responses to correct stops, and the parametric correlation of task-related BOLD with the TUT ratings:

Activations during correct stop trials.
Deactivations to stop trials (blue) and parametric correlation with TUT reports (red).

As you can see, correct stop trials elicit a rather canonical activation pattern in the motor-inhibition and salience networks, with concurrent deactivations in visual cortex and the default mode network (second figure, blue blobs). I think of this pattern a bit like the brain receiving the ‘stop signal’ and going (à la Picard): “FULL STOP, MAIN VIEWER OFF, FIRE THE PHOTON TORPEDOES!”, launching into full response-recovery mode. Interestingly, while we replicated the finding of medial-prefrontal co-variation with TUTs (second figure, red blob), this area was substantially more rostral than the stop-related deactivations, supporting previous findings of some degree of functional segregation between the inhibitory and mind-wandering-related components of the DMN.

Finally, when examining the Aware > Unaware errors contrast, we replicated the typical salience network activations (mid-cingulate and anterior insula). Interestingly, we also found strong bilateral activations in an area of the inferior parietal cortex also considered to be part of the default mode. This finding further strengthens the link between mind-wandering and metacognition, indicating that the salience and default mode networks may work in concert during conscious error awareness:

Activations to the Aware > Unaware errors contrast.

In all, this was a very valuable and fun study for me. As a PhD student, being able to replicate the function of the classic ‘executive’, ‘salience’, and ‘default mode’ resting-state networks with a basic task was a great experience, helping me place some confidence in these labels. I was also able to combine a classical behavioral metacognition task with introspective thought probes, and show that the probes do indeed contain valuable information about task performance and related brain processes. Importantly, though, we showed that the content of mind-wandering reports doesn’t tell the whole story of spontaneous cognition. In the future I would like to explore this idea further, perhaps by taking a time-series approach to probe the dynamics of mind-wandering, using a simple continuous feedback device that participants could use throughout an experiment. In the affect literature such devices have been used to probe the dynamics of valence and arousal while participants view naturalistic movies, and I believe such an approach could reveal even greater granularity in how the experience of mind-wandering (and its fluctuation) interacts with cognition. Our findings suggest that the relationship between mind-wandering and task performance may be more nuanced than mere antagonism, an important possibility I hope to explore in future research.

Citation: Allen M, Smallwood J, Christensen J, Gramm D, Rasmussen B, Jensen CG, Roepstorff A and Lutz A (2013) The balanced mind: the variability of task-unrelated thoughts predicts error monitoring. Front. Hum. Neurosci. 7:743. doi: 10.3389/fnhum.2013.00743

Birth of a New School: How Self-Publication can Improve Research

Edit: click here for a PDF version and citable figshare link!

Preface: What follows is my attempt to imagine a radically different future for research publishing. Apologies for any overlooked references – the following is meant to be speculative and purposely walks the line between paper and blog post. Here is to a productive discussion regarding the future of research.

Our current systems of producing, disseminating, and evaluating research could be substantially improved. For-profit publishers enjoy extremely high, taxpayer-funded profit margins. Traditional closed-door peer review is creaking under the weight of an exponentially growing knowledge base, delaying important communications and often resulting in seemingly arbitrary publication decisions [1–4]. Today’s young researchers are frequently dismayed to find their painstaking work on quality reviews overlooked or discouraged by journalistic editorial practices. In response, the research community has risen to the challenge of reform, giving birth to an ever-expanding multitude of publishing tools: statistical methods to detect p-hacking [5], numerous open-source publication models [6–8], and innovative platforms for data and knowledge sharing [9,10].

While I applaud the arrival and intent of these tools, I suspect that ultimately publication reform must begin with publication culture – with the very way we think of what a publication is and can be. After all, how can we effectively create infrastructure for practices that do not yet exist? Last summer, shortly after igniting #pdftribute, I began to think more and more about the problems confronting the publication of results. After months of conversations with colleagues I am now convinced that real reform will come not in the shape of new tools or infrastructures, but rather in the culture surrounding academic publishing itself. In many ways our current publishing infrastructure is the product of a paper-based society keen to produce lasting artifacts of scholarly research. In parallel, the exponential rise of the networked society has led to an open-source software community in which knowledge is not a static artifact but rather an ever-expanding, living document of intelligent productivity. We must move towards “research 2.0” and beyond [11].

From Wikipedia to GitHub, open-source communities are changing the way knowledge is produced and disseminated. Already this movement has begun to reach academia, with researchers across disciplines flocking to social media, blogs, and novel communication infrastructures to create a new movement of post-publication peer review [4,12,13]. In math and physics, researchers have already embraced self-publication, uploading preprints to the online repository arXiv, and more and more disciplines are using the site to archive their research. I believe the inevitable future of research communication lies in this open-source metaphor, in the form of pervasive self-publication of scholarly knowledge. The question is thus not where we are going, but how we prepare for this radical change in publication culture. In asking these questions I would like to imagine what research will look like 10, 15, or even 20 years from today. This post is intended as a first step towards bringing to light specific ideas for how this transition might be facilitated. Rather than a prescriptive essay, this is merely an attempt to imagine what that future might look like. I invite you to treat what follows as an ‘open beta’ for these ideas.

Part 1: Why self-publication?

I believe the essential metaphor is the open-source software community. To this end, over the past few months I have feverishly discussed the merits and risks of self-publishing scholarly knowledge with my colleagues and peers. While at first I worried many would find the notion of self-publication utterly absurd, I have been astonished at the responses – many have been excitedly optimistic! I was surprised to find that some of my most critical and stoic colleagues have lost so much faith in traditional publication and peer review that they are ready to consider more radical options.

The basic motivation for research self-publication is pretty simple: research papers cannot be properly evaluated without first being read. Now, by evaluation I don’t mean for the purposes of hiring or grant-giving committees. Those are essentially financial decisions, e.g. “how do I effectively spend my money without reading the papers of the 200+ applicants for this position?” Such decisions will always rely on heuristics and metrics that necessarily sacrifice accuracy for efficiency. However, I believe that a self-publication culture will provide a finer grain of metrics than ever dreamed of under our current system. By documenting each step of the research process, self-publication and open science can yield rich information that can be mined for increasingly useful impact measures – but more on that later.

When it comes to evaluating research, many admit that there is no substitute for opening up an article and reading its content – regardless of journal. My prediction is that, as post-publication peer review gains acceptance, some tenured researcher or brave young scholar will eventually decide to simply self-publish her research directly onto the internet, and when that research goes viral, a deluge of self-publications will follow. Of course, busy lives require heuristic decisions, and it’s arguable that publishers provide this editorial service. While I will address this issue specifically in Part 3, for now I want to point out that growing empirical evidence suggests our current publisher/impact-based system provides an unreliable heuristic at best [14–16]. Thus, my essential reason for supporting self-publication is that in the worst-case scenario, self-publications must be accompanied by the disclaimer: “read the contents and decide for yourself.” As self-publishing practices are established, it is easy to imagine these difficulties being largely mitigated by self-published peer reviews and novel infrastructures supporting these interactions.

Indeed, with a little imagination we can picture plenty of potential benefits of self-publication to offset the risk that we might read poor papers. Researchers spend exorbitant amounts of time reviewing, commenting on, and discussing articles – and most of that rich content and metadata is lost under the current system. By documenting the research process more thoroughly, the ensuing flood of self-published material can support new quantitative metrics of reviewer trust, and be further utilized to develop rich information about new ideas and data in near real-time. To give just one example, we might count how many subsequent citations or retractions the papers a particular reviewer endorsed go on to accrue, yielding a reviewer impact factor and reliability index. The more aspects of research we publish, the greater the data-mining potential. Incentivizing in-depth reviews that add clarity and conceptual content to research, rather than merely knocking down or propping up equally imperfect artifacts, will ultimately improve research quality. By self-publishing well-documented, open-sourced pilot data and accompanying digital reagents (e.g. scripts, stimulus materials, protocols, etc.), researchers can get instant feedback from peers, preventing uncounted research dollars from being wasted. Previously closed-door conferences can become live records of new ideas and conceptual developments as they unfold. The metaphor here is research as open source – an ever-evolving, living record of knowledge as it is created.
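
To make the reviewer-metric idea concrete, here is one naive way such an index might be computed (the data model and weighting are entirely hypothetical, for illustration only; a real metric would need far more care):

```python
from dataclasses import dataclass

@dataclass
class ReviewedPaper:
    citations: int   # citations accrued by a paper this reviewer endorsed
    retracted: bool  # whether that paper was later retracted

def reviewer_reliability(papers: list[ReviewedPaper],
                         retraction_penalty: int = 50) -> float:
    """Naive index: citations accrued by endorsed papers, discounted by
    retractions. The penalty weight is an arbitrary illustrative choice."""
    score = sum(p.citations for p in papers)
    penalty = retraction_penalty * sum(p.retracted for p in papers)
    return score / (score + penalty) if (score + penalty) else 0.0

history = [ReviewedPaper(120, False), ReviewedPaper(8, True), ReviewedPaper(40, False)]
print(f"reliability index: {reviewer_reliability(history):.2f}")
```

The point is not this particular formula, but that an open record of who reviewed what makes such metrics computable at all.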

Now, let’s contrast this model with the current publishing system. Every publisher (including open-access outlets) obliges researchers to adhere to arbitrarily varied formatting constraints, presentation rules, submission and acceptance fees, and review cultures. Researchers review for free, often for publicly subsidized work, so that publishers can turn around and sell the finished product back to those same researchers (and the public) at an exorbitant mark-up. These constraints introduce lengthy delays – ranging from 6+ months in the sciences all the way up to two years in some humanities disciplines. By contrast, how you self-publish your research is entirely up to you – where, when, how, the formatting, and the openness. Put simply, if you could publish your research how and when you wanted, and have it generate the same “impact” as traditional venues, why would you use a publisher at all?

One obvious reason to use publishers is copy-editing, i.e. the creation of pretty manuscripts. Another is the guarantee of high-profile distribution. Under the current system these are legitimate worries. While it is already possible to produce reasonably formatted papers, an open-source, easy-to-use copy-editing tool is ultimately needed to facilitate mainstream self-publication. Innovators like figshare are already leading the way in this area. In the next section, I will consider different ways in which self-publication can overcome these and other potential limitations, in terms of specific applications and guidelines for maximizing the utility of self-published research. To do so, I will outline a few specific cases with the most potential for self-publication to make a positive impact on research right away, and hopefully illuminate the ‘why’ question a bit further with some concrete examples.

Part 2: Where to begin self-publishing

What follows is the “how-to” part of this document. I must preface it by saying that although I have written so far with researchers across the sciences and humanities in mind, I will now focus primarily on the scientific examples with which I am more experienced. The transition to self-publication is already happening in the forms of academic tweets, self-archives, and blogs, at a seemingly exponential growth rate. To be clear, I do not believe that the new publication culture will be utopian. As in many human endeavors, the usual brandism [3], politics, and corruption can be expected to appear in this new culture. Accordingly, the transition is likely to be a bit wild and woolly around the edges. Like any generational culture shift, new practices must first emerge before infrastructures can be put in place to support them. My hope is to contribute to that cultural shift from artifact- to process-based research by outlining particularly promising early venues for self-publication. Once these practices become more common, there will be huge opportunities for those ready and willing to step in and provide rich informational architectures to support and enhance self-publication – but for now we can only step into that wild frontier.

In my discussions with others I have identified three particularly promising areas where self-publication is either already contributing or can begin contributing to research. These are: the publication of exploratory pilot data, post-publication peer reviews, and trial pre-registration. I will cover each in turn, attempting to provide examples and templates where possible. Finally, Part 3 will examine some common concerns with self-publication. In general, I think that successful reforms should resemble existing research practices as much as possible: publication solutions are most effective when they resemble daily practices already in place, rather than forcing individuals into novel practices or infrastructures with an unclear time-commitment. A frequent criticism of current solutions such as the comments sections on Frontiers and PLOS One, or the newly developed PubPeer, is that they are rarely used by the general academic population. It is reasonable to conclude that this is because already over-worked academics see little plausible benefit in contributing to these discussions given the current publishing culture (worse still, they may fear other negative repercussions, discussed in Part 3). Thus a central theme of the following examples is that they mirror practices in which many academics are already engaged, with complementary incentive structures (e.g. citations).

Example 1: Exploratory pilot data

This past summer witnessed a fascinating clash of research cultures, with the eruption of intense debate between pre-registration advocates and pre-registration skeptics. I derived useful insights from both sides of that discussion. Many were concerned about what would happen to exploratory data under these new publication regimes. Indeed, a general worry with existing reform movements is that they appear to emphasize a highly conservative and somewhat cynical “perfect papers” culture. I do not believe in perfect papers – the scientific model is driven by replication and discovery. No paper can ever be 100% flawless – otherwise there would be no reason for further research! And inevitably, some will find ways to cheat any system. Accordingly, reform must incentivize better reporting practices over stricter control, or at least strike a balance between the two extremes.

Exploratory pilot data are an excellent avenue for this. By their very nature such data are not confirmatory – they are exciting precisely because they do not conform well to prior predictions. Such data benefit from rapid communication and feedback. Imagine an intuition-based project – a side or pet project conducted on the fly, for example. The researcher might feel that the project has potential, but also knows that there could be serious flaws. Most journals won’t publish these kinds of data. Under the current system they are lost, hidden, obscured, or otherwise forgotten.

Compare this to a self-publication world: the researcher can upload the data, document all the protocols, make the presentation and analysis scripts open source, and provide well-written documentation explaining why she thinks the data are of interest. Some intrepid graduate student might find it and follow up with a valuable control analysis, pointing out an excellent feature or fatal flaw, which he can then upload as a direct citation to the original data. Both publications are citable, giving credit to originator and reviewer alike. Armed with this new knowledge, the original researcher could pre-register an altered protocol and conduct a full study on the subject (or alternatively, abandon the project entirely). In this exchange, hundreds of hours and research dollars will likely have been saved. Additionally, the entire process will have been documented, making it both citable and minable for impact metrics. Tools already exist for each of these steps – largely cultural fears prevent this from happening. How would it be perceived? Would anyone read it? Will someone steal my idea? To better frame these issues, I will now examine a self-publication practice that has already emerged in force.

Example 2: Post-publication peer review

This is a particularly easy case, precisely because high-profile scholars are already regularly engaged in the practice. As I’ve frequently joked on Twitter, we’re rapidly entering an era where publishing in a glam-mag carries no impact guarantee if the paper itself isn’t worthwhile – you may as well hang a target on your head for post-publication peer reviewers. However, I want to emphasize the positive benefits, not just the conservative controls. Post-publication peer review (PPPR) has already begun to change the way we view research, with reviewers adding lasting content to papers, enriching the conclusions one can draw, and pointing out novel connections not drawn out by the authors themselves. Here I like to draw an analogy to the open-source movement, where code (and its documentation) is forkable, versioned, and open to constant revision – never static but always evolving.

Indeed, just last week PubMed launched their new “PubMed Commons” system, an innovative PPPR comment system whereby any registered person (with at least one paper on PubMed) can leave scientific comments on articles. Inevitably, the reception on Twitter and Facebook mirrored previous attempts to introduce infrastructure-based solutions – mixed excitement followed by a lot of bemused cynicism – “bring out the trolls,” many joked. To wit, a brief scan of the average comment on another platform, PubPeer, revealed a generally (but not entirely) poor level of comment quality. While many comments seemed on topic, most had little to no formatting and were given with little context. At times comments can seem trollish, pointing out minor flaws as if they render the paper worthless. In many disciplines, like my own, few comments could be found at all. This compounds the central problem with PPPR: why would anyone engage with such a system if the primary result is poorly formed nitpicking of your research? The essential problem here is again incentive – for reviews to be of high quality, there must be an incentive to write them. We need a culture of PPPR that values positive and negative comments equally. This is common to both traditional and self-publication practices.

To facilitate easy, incentivized self-publication of comments and PPPRs, my colleague Hauke Hillebrandt and I have attempted to create a simple template that researchers can use to quickly and easily publish these materials. The idea is that by using these templates and uploading them to figshare or similar services, Google Scholar will automatically index them as citations, provide citation alerts to the original authors, and even include the comments in its h-index calculation. This way researchers can begin to get credit for what they are already doing, in an easy-to-use and familiar format. While the template isn’t quite working yet (oddly enough, Scholar is counting citations from my blog, but not the template), you can take a look at it here and maybe help us figure out why it isn’t working! In the near future we plan to get this working, and will follow up this post with the full template, ready for you to use.

Example 3: Pre-registration of experimental trials

As my final example, I suggest that for many researchers, self-publication of trial pre-registrations (PRs) may be an excellent way to test the waters of pre-registration in a format with a low barrier to entry. Replication attempts are a particularly promising venue for PR, and self-publication of such registrations is a way to move quickly from idea to registration to collection (as in the pilot-data example above), while ensuring that credit for the original idea is embedded in the infamously hard-to-erase memory of the internet.

One benefit of self-publishing PRs, rather than relying on for-profit publishers, is that PR templates can themselves be open-sourced, allowing various research fields to generate community-based, specialized templates adhering to the needs of each field. Self-published PRs, as well as high-quality templates, can be cited – incentivizing the creation and dissemination of both. I imagine specialized templates rapidly emerging within each community, tailored to the needs of that research discipline.

Part 3: Criticism and limitations

Here I will close by considering some common concerns with self-publication:

Quality of data

A natural worry at this point is quality control. How can we be sure that what is published without the seal of peer review isn’t complete hooey? The primary response is that we cannot – just as we cannot be sure that peer-reviewed materials are of high quality without first reading them ourselves. Still, it is for this reason that I tried to suggest a few particularly ripe venues for self-publication of research. The cultural zeitgeist supporting full-blown scholarly self-publication has not yet arrived, but we can already begin to prepare for it. With regard to filtering noise, I argue that by coupling post-publication peer review and social media, quality self-publications will rise to the top. Importantly, this issue points toward flaws in our current publication culture. In many research areas there are effects that are repeatedly published but that few believe, largely due to biases against null findings. Self-publication aims to make as much of the research process publicly available as possible, preventing this kind of knowledge from slipping through the editorial cracks and improving our ability to evaluate the veracity of published effects. If such data are reported cleanly and completely, existing quantitative tools can incorporate them to better estimate the likelihood of p-hacking within a literature. That leads to the next concern – quality of presentation.


Quality of presentation

Many ask: how, in this brave new world, will we separate signal from noise? I am sure that every published researcher already receives at least a few garbage citations a year from obscure places in obscure journals with little relevance to the actual article contents. But, so the worry goes, what if we are deluged with a vast array of poorly written, poorly documented, self-published crud? How would we separate the signal from the noise?

The answer is content, presentation, and clarity. These must be treated as central guidelines for self-publication to be worth anyone’s time. The internet memesphere has already generated one rule for ranking interest: content rules. Content floats and is upvoted; blogspam sinks and is downvoted. This is already true for published articles – Twitter, Reddit, Facebook, and email circles help us separate the wheat from the chaff at least as much as impact factor, if not more. But presentation and clarity are equally important. Poorly conducted research is either not shared or shared only to be torn apart. Similarly, poorly written self-publications or poorly documented data and reagents are unlikely to generate positive feedback, much less impact-generating eyeballs. I like to imagine a distant future in which self-publication has given rise to a new generation of well-regarded specialists: reviewers prized for their content, presentation, and clarity; coders who produce cleanly documented pipelines; behaviorists producing powerful and easily customized paradigm scripts; and data-collection experts who produce the smoothest, cleanest data around. All of these future specialists will be able to garner impact for the things they already do, incentivizing each step of the research process rather than only the end product.

Being scooped, intellectual credit

Another common concern is: “what if my idea/data/pilot is scooped?” I acknowledge that, particularly in these early days, the decision to self-publish must be weighed against this possibility. However, I must also point out that under the current system authors must likewise weigh the decision to develop an idea in isolation against the benefits of communicating with peers and colleagues. Both have risks and benefits – a researcher working in isolation can easily overestimate the quality or impact of an idea. The decision to self-publish must similarly be weighed against the need for feedback. Furthermore, a self-publication culture would allow researchers to move more quickly from project to publication, ensuring that they are readily credited for their work. And as research culture continues to evolve, I believe this concern will increasingly fade. It is notoriously difficult to erase information from the internet (see the “Streisand effect”) – there is no reason why self-published ideas and data cannot generate direct credit for their authors. Indeed, I envision a world in which these contributions can themselves be independently weighted and credited.

Prevention of cheating, corruption, self-citations

To some, this will be an inevitable point of departure. Without our time-tested guardian of peer review, what is to prevent a flood of outright fabricated data? My response is: what prevents outright fabrication under the current system? To misquote Jeff Goldblum in Jurassic Park, cheaters will always find a way. No matter how much we tighten our grip, there will be those who respond to the pressures of publication with deliberate misconduct. I believe that the current publication system directly incentivizes such behavior by valuing end product over process. By creating incentives for low-barrier post-publication peer review, pre-registration, and rich pilot-data publication, researchers are given the opportunity to generate impact at each step of the research process. When faced with the choice between the vast penalties of cheating over a null finding and doing one’s best to turn those data into something useful for someone, I suspect most people will choose the honest, less risky option.

Corruption and self-citations are perhaps a subtler, more sinister factor. In my discussions with colleagues, a frequent concern is that there is nothing to prevent high-impact “rich club” institutions from banding together to provide glossy post-publication reviews, citation farming, or promoting one another’s research to the top of the pile regardless of content. I again answer: how is this any different from our current system? Papers are submitted to an editor who makes a subjective evaluation of the paper’s quality and impact, before sending it to four out of a thousand possible reviewers who will make an obscure decision about the content of the paper. Sometimes this system works well, but increasingly it does not [2]. Many have witnessed great papers rejected for political reasons, and poor ones accepted for the same. Lowering the barrier to post-publication peer review means that even when these factors drive a paper to the top, it will be far easier to contextualize that research with a heavy dose of reality. Over time, I believe self-publication will incentivize good research. Cheating will always be a factor – and this new frontier is unlikely to be a utopia. Rather, I hope to contribute to the development of a bridge between our traditional publishing models and a radically advanced, not-too-distant future.

Conclusion

Our current systems of producing, disseminating, and evaluating research increasingly seem out of step with cultural and technological realities. To take back the research process and bolster the ailing standard of peer review, I believe research will ultimately adopt an open and largely publisher-free model. In my view, these new practices will be entirely complementary to existing solutions such as the p-curve [5], open-source publication models [6–8], and innovative platforms for data and knowledge sharing such as PubPeer, PubMed Commons, and figshare [9,10]. The next step from here will be to produce usable templates for self-publication. You can expect to see a PDF version of this post in the coming weeks as a further example of self-publishing practices. In attempting to build a bridge to the coming technological and social revolution, I hope to inspire others to join the conversation so that we can improve all aspects of research.

Acknowledgments

Thanks to Hauke Hillebrandt, Kate Mills, and Francesca Fardo for invaluable discussion, comments, and edits of this work. Many of the ideas developed here were originally inspired by this post envisioning a self-publication future. Thanks also to PubPeer, PeerJ, figshare, and others in this area for their pioneering work in providing some valuable tools and spaces to begin engaging with self-publication practices.

Addendum

Excellent resources already exist for many of the ideas presented here. I want to give special notice to researchers who have already begun self-publishing their work, whether as preprints, archives, or direct blog posts. Parallel publishing is an attractive transitional option in which researchers pre-publish their work for immediate feedback before submitting it to a traditional publisher. Special notice should be given to Zen Faulkes, whose excellent pioneering blog posts demonstrated that it is reasonably easy to self-produce well-formatted publications. Here are a few pioneering self-published papers you can use as examples – feel free to add your own in the comments:

The distal leg motor neurons of slipper lobsters, Ibacus spp. (Decapoda, Scyllaridae), Zen Faulkes

http://neurodojo.blogspot.dk/2012/09/Ibacus.html

Eklund, Anders (2013): Multivariate fMRI Analysis using Canonical Correlation Analysis instead of Classifiers, Comment on Todd et al. figshare.

http://dx.doi.org/10.6084/m9.figshare.787696

Automated removal of independent components to reduce trial-by-trial variation in event-related potentials, Dorothy Bishop

http://bishoptechbits.blogspot.dk/2011_05_01_archive.html

Deep Impact: Unintended consequences of journal rank

Björn Brembs, Marcus Munafò

http://arxiv.org/abs/1301.3748

A novel platform for open peer-to-peer review and publication:

http://thewinnower.com/

A platform for open PPPRs:

https://pubpeer.com/

Another PPPR platform:

http://f1000.com/

References

1. Henderson, M. Problems with peer review. BMJ 340, c1409 (2010).

2. Ioannidis, J. P. A. Why Most Published Research Findings Are False. PLoS Med 2, e124 (2005).

3. Peters, D. P. & Ceci, S. J. Peer-review practices of psychological journals: The fate of published articles, submitted again. Behav. Brain Sci. 5, 187 (1982).

4. Hunter, J. Post-publication peer review: opening up scientific conversation. Front. Comput. Neurosci. 6, 63 (2012).

5. Simonsohn, U., Nelson, L. D. & Simmons, J. P. P-Curve: A Key to the File Drawer. (2013). at <http://papers.ssrn.com/abstract=2256237>

6. MacCallum, C. J. ONE for All: The Next Step for PLoS. PLoS Biol. 4, e401 (2006).

7. Smith, K. A. The frontiers publishing paradigm. Front. Immunol. 3, 1 (2012).

8. Wets, K., Weedon, D. & Velterop, J. Post-publication filtering and evaluation: Faculty of 1000. Learn. Publ. 16, 249–258 (2003).

9. Allen, M. PubPeer – A universal comment and review layer for scholarly papers? | Neuroconscience on WordPress.com. Website/Blog (2013). at <http://neuroconscience.com/2013/01/25/pubpeer-a-universal-comment-and-review-layer-for-scholarly-papers/>

10. Hahnel, M. Exclusive: figshare a new open data project that wants to change the future of scholarly publishing. Impact Soc. Sci. blog (2012). at <http://eprints.lse.ac.uk/51893/1/blogs.lse.ac.uk-Exclusive_figshare_a_new_open_data_project_that_wants_to_change_the_future_of_scholarly_publishing.pdf>

11. Yarkoni, T., Poldrack, R. A., Van Essen, D. C. & Wager, T. D. Cognitive neuroscience 2.0: building a cumulative science of human brain function. Trends Cogn. Sci. 14, 489–496 (2010).

12. Bishop, D. BishopBlog: A gentle introduction to Twitter for the apprehensive academic. Blog/website (2013). at <http://deevybee.blogspot.dk/2011/06/gentle-introduction-to-twitter-for.html>

13. Hadibeenareviewer. Had I Been A Reviewer on WordPress.com. Blog/website (2013). at <http://hadibeenareviewer.wordpress.com/>

14. Tressoldi, P. E., Giofré, D., Sella, F. & Cumming, G. High Impact = High Statistical Standards? Not Necessarily So. PLoS One 8, e56180 (2013).

15. Brembs, B. & Munafò, M. Deep Impact: Unintended consequences of journal rank. (2013). at <http://arxiv.org/abs/1301.3748>

16. Eisen, J. A., MacCallum, C. J. & Neylon, C. Expert Failure: Re-evaluating Research Assessment. PLoS Biol. 11, e1001677 (2013).


Publications

See also https://scholar.google.co.uk/citations?user=C49AeHAAAAAJ&hl=en

Forthcoming

*indicates shared first author. 

Allen, M., Hauser, T. U., Fleming, S. M., Schwarzkopf, D. S., Glen, J. C., & Rees, G. (In Preparation). Autonomic and neuroanatomical underpinnings of the precision-weighted confidence bias.

Allen, M., & Fletcher, P. C. (In Preparation). Embodied Computational Psychiatry.

Allen, M., & Rees, G. (In Preparation). Metacognition as Self-Inference.

*Allen, M., *Vatansever, D., Karapanagiotidis, T., & Smallwood, J. (In Preparation). Effective connectivity reveals within- and between-network signatures of self-generated thought.

 

2018

Owens, A., Allen, M., Ondobaka, S., & Friston, K. J. (2018). Interoceptive inference: from computational neuroscience to clinic. Neuroscience & Biobehavioral Reviews. https://doi.org/10.1016/j.neubiorev.2018.04.017

Hauser, T. U., Allen, M., Rees, G., & Dolan, R. J. (2018). Metacognitive impairments extend perceptual weaknesses in compulsivity. Scientific Reports. https://doi.org/10.1038/s41598-017-06116-z

Allen, M. (2018). The foundation: Mechanism, prediction, and falsification in Bayesian enactivism. Comment on “Answering Schrödinger’s question: A free-energy formulation” by Maxwell James Désormeau Ramstead et al. Physics of Life Reviews. https://doi.org/10.1016/j.plrev.2018.01.007

Marshall, C. R., Hardy, C. J., Allen, M., Russell, L. L., Clark, C. N., Bond, R. L., Dick, K. M., Brotherhood, E. V., Rohrer, J. D., Kilner, J. M., & Warren, J. D. (2018). Cardiac responses to viewing facial emotion differentiate frontotemporal dementias. Ann Clin Transl Neurol, 5: 687-696. doi:10.1002/acn3.563

2017

*Hauser, T. U., *Allen, M., Purg, N., Moutoussis, M., Rees, G., & Dolan, R. J. (2017). Noradrenaline blockade specifically enhances metacognitive performance. eLife, 6, e24901. https://doi.org/10.7554/eLife.24901

*Allen, M., *Frank, D., Glen, J. C., Fardo, F., Callaghan, M. F., & Rees, G. (2017). Insula and somatosensory cortical myelination and iron markers underlie individual differences in empathy. Scientific Reports, 7, 43316. https://doi.org/10.1038/srep43316

Allen, M., Glen, J. C., Müllensiefen, D., Schwarzkopf, D. S., Fardo, F., Frank, D., … Rees, G. (2017). Metacognitive ability correlates with hippocampal and prefrontal microstructure. NeuroImage, 149, 415–423. https://doi.org/10.1016/j.neuroimage.2017.02.008

Fardo, F., Vinding, M. C., Allen, M., Jensen, T. S., & Finnerup, N. B. (2017). Delta and gamma oscillations in operculo-insular cortex underlie innocuous cold thermosensation. Journal of Neurophysiology. https://doi.org/10.1152/jn.00843.2016

Fardo, F., Auksztulewicz, R., Allen, M., Dietz, M. J., Roepstorff, A., & Friston, K. J. (2017). Expectation violation and attention to pain jointly modulate neural gain in somatosensory cortex. NeuroImage. https://doi.org/10.1016/j.neuroimage.2017.03.041

Kumar, S., Tansley-Hancock, O., Sedley, W., Winston, J. S., Callaghan, M. F., Allen, M., … Griffiths, T. D. (2017). The Brain Basis for Misophonia. Current Biology, 27(4), 527–533. https://doi.org/10.1016/j.cub.2016.12.048

2016

Allen, M., Frank, D., Schwarzkopf, D. S., Fardo, F., Winston, J. S., Hauser, T. U., & Rees, G. (2016). Unexpected arousal modulates the influence of sensory noise on confidence. eLife, 5, e18103. https://doi.org/10.7554/eLife.18103

Gallagher, S., & Allen, M. (2016). Active inference, enactivism and the hermeneutics of social cognition. Synthese, 1–22. https://doi.org/10.1007/s11229-016-1269-8

Allen, M., Fardo, F., Dietz, M. J., Hillebrandt, H., Friston, K. J., Rees, G., & Roepstorff, A. (2016). Anterior insula coordinates hierarchical processing of tactile mismatch responses. NeuroImage, 127, 34–43. https://doi.org/10.1016/j.neuroimage.2015.11.030

Allen, M., & Friston, K. J. (2016). From cognitivism to autopoiesis: towards a computational framework for the embodied mind. Synthese, 1–24. https://doi.org/10.1007/s11229-016-1288-5

2011 – 2015

Fardo, F., Allen, M., Jegindø, E.-M. E., Angrilli, A., & Roepstorff, A. (2015). Neurocognitive evidence for mental imagery-driven hypoalgesic and hyperalgesic pain regulation. NeuroImage, 120, 350–361. https://doi.org/10.1016/j.neuroimage.2015.07.008

Allen, M., Smallwood, J., Christensen, J., Gramm, D., Rasmussen, B., Gaden Jensen, C., … Lutz, A. (2013). The balanced mind: the variability of task-unrelated thoughts predicts error-monitoring. Frontiers in Human Neuroscience, 7. https://doi.org/10.3389/fnhum.2013.00743

Allen, M., Dietz, M., Blair, K. S., Beek, M. van, Rees, G., Vestergaard-Poulsen, P., … Roepstorff, A. (2012). Cognitive-Affective Neural Plasticity following Active-Controlled Mindfulness Intervention. Journal of Neuroscience, 32(44), 15601–15610. https://doi.org/10.1523/JNEUROSCI.2957-12.2012

Tylen, K., Allen, M., Hunter, B. K., & Roepstorff, A. (2012). Interaction vs. observation: distinctive modes of social cognition in human brain and behavior? A combined fMRI and eye-tracking study. Frontiers in Human Neuroscience, 6. https://doi.org/10.3389/fnhum.2012.00331

Ma, Y., Bang, D., Wang, C., Allen, M., Frith, C., Roepstorff, A., & Han, S. (2012). Sociocultural patterning of neural activity during self-reflection. Social Cognitive and Affective Neuroscience, nss103. https://doi.org/10.1093/scan/nss103

Allen, M., & Williams, G. (2011). Consciousness, Plasticity, and Connectomics: The Role of Intersubjectivity in Human Cognition. Frontiers in Psychology, 2. https://doi.org/10.3389/fpsyg.2011.00020

Mental Training and Neuroplasticity – PhD Complete!

I was asked to write a brief summary of my PhD research for our annual CFIN report. I haven’t blogged in a while and it turned out to be a decent little blurb, so I figured I might as well share it here. Enjoy!

In the past decade, reports concerning the natural plasticity of the human brain have taken a spotlight in the media and popular imagination. In the pursuit of neural plasticity, nearly every imaginable specialization, from taxi drivers to Buddhist monks, has had its day in the scanner. These studies reveal marked functional and structural neural differences between various populations of interest, driving a wave of interest in harnessing the brain’s plasticity for rehabilitation, education, and even increasing intelligence (Green and Bavelier, 2008). Under this new “mental training” research paradigm, investigators are now examining what happens to brain and behavior when novices are randomized to a training condition, using longitudinal brain imaging.


These studies highlight a few promising domains for harnessing neural plasticity, particularly in the realms of visual attention, cognitive control, and emotional training. By randomizing novices to a brief ‘dose’ of action video game or meditation training, researchers can go beyond mere cross-sectional comparison and make inferences regarding the causal impact of training on observed neural outcomes. Initial results are promising, suggesting that domains of great clinical relevance such as emotional and attentional processing are amenable to training (Lutz et al., 2008a; Lutz et al., 2008b; Bavelier et al., 2010). However, these findings are currently obscured by a host of methodological limitations.

These span from behavioral confounds (e.g. motivation and demand characteristics) to inadequate longitudinal processing of brain images, which presents particular challenges not found in within-subject or cross-sectional designs (Davidson, 2010; Jensen et al., 2011). The former can be addressed directly by careful construction of “active control” groups, in which both comparison and control groups receive putatively effective treatments, carefully designed to isolate the hypothesized “active ingredients” driving behavioral and neuroplasticity outcomes. In this way researchers can make inferences about mechanistic specificity while excluding non-specific confounds such as social support, demand, and participant motivation.

We set out to investigate one particularly popular intervention, mindfulness meditation, while controlling for these factors. Mindfulness meditation has enjoyed a great deal of research interest in recent years, largely due to promising findings indicating good efficacy of meditation training (MT) for emotion processing and cognitive control (Sedlmeier et al., 2012). Clinical studies indicate that MT may be particularly effective for disorders that are typically non-responsive to cognitive-behavioral therapy, such as severe depression and anxiety (Grossman et al., 2004; Hofmann et al., 2010). Understanding the neural mechanisms underlying such benefits remains difficult, however, as most existing investigations are cross-sectional in nature or depend upon inadequate “wait-list” passive control groups.

We addressed these difficulties in an investigation of functional and structural neural plasticity before and after a 6-week active-controlled mindfulness intervention. To control for demand, social support, teacher enthusiasm, and participant motivation, we constructed a “shared reading and listening” active control group for comparison to MT. By eliciting daily “experience samples” of participants’ motivation to practice and minutes practiced, we ensured that groups did not differ on common motivational confounds.

We found that while both groups showed equivalent improvement on behavioral response-inhibition and meta-cognitive measures, only the MT group significantly reduced affective-Stroop conflict reaction times (Allen et al., 2012). Further, we found that MT participants showed significantly greater increases in recruitment of dorsolateral prefrontal cortex (DLPFC) than did controls, a region implicated in cognitive control and working memory. Interestingly, we did not find group differences in emotion-related reaction times or BOLD activity; instead we found that fronto-insular and medial-prefrontal BOLD responses in the MT group were significantly more correlated with practice than in controls. These results indicate that while brief MT is effective for training attention-related neural mechanisms, only participants with the greatest amount of practice showed altered neural responses to negative affective stimuli. This result is important because it underlines the differential response of various target skills to training and suggests specific applications of MT depending on time and motivation constraints.

MT-related increase in DLPFC activity during the affective Stroop task.

In a second study, we utilized a longitudinally optimized pipeline to assess structural neuroplasticity in the same cohort described above (Ashburner and Ridgway, 2012). A crucial issue in longitudinal voxel-based morphometry and similar methods is the prevalence of “asymmetrical preprocessing” – for example, where normalization parameters are calculated from baseline images and applied to follow-up images, inflating the risk of false-positive results. We thus applied a fully symmetrical deformation-based morphometric pipeline to assess training-related expansions and contractions of gray-matter volume. While we found significant increases within the MT group, these differences did not survive the group-by-time comparison and thus may represent false positives; it is likely that such differences would not have been ruled out by an asymmetric pipeline or a non-active-controlled design. These results suggest that brief MT may act only on functional neuroplasticity, and that greater training is required for more lasting anatomical alterations.

These projects are a promising advance in our understanding of neural plasticity and mental training, and highlight the need for careful methodology and control when investigating such phenomena. The investigation of neuroplasticity mechanisms may one day revolutionize our understanding of human learning and neurodevelopment, and we look forward to seeing a new wave of carefully controlled investigations in this area.

You can read more about the study in this blog post, where I explain it in detail. 

A happy day, my PhD defense!

References

Allen M, Dietz M, Blair KS, van Beek M, Rees G, Vestergaard-Poulsen P, Lutz A, Roepstorff A (2012) Cognitive-Affective Neural Plasticity following Active-Controlled Mindfulness Intervention. The Journal of Neuroscience 32:15601-15610.

Ashburner J, Ridgway GR (2012) Symmetric diffeomorphic modeling of longitudinal structural MRI. Frontiers in Neuroscience 6.

Bavelier D, Levi DM, Li RW, Dan Y, Hensch TK (2010) Removing brakes on adult brain plasticity: from molecular to behavioral interventions. The Journal of Neuroscience 30:14964-14971.

Davidson RJ (2010) Empirical explorations of mindfulness: conceptual and methodological conundrums. Emotion 10:8-11.

Green C, Bavelier D (2008) Exercising your brain: a review of human brain plasticity and training-induced learning. Psychology and Aging; Psychology and Aging 23:692.

Grossman P, Niemann L, Schmidt S, Walach H (2004) Mindfulness-based stress reduction and health benefits: A meta-analysis. Journal of Psychosomatic Research 57:35-43.

Hofmann SG, Sawyer AT, Witt AA, Oh D (2010) The effect of mindfulness-based therapy on anxiety and depression: A meta-analytic review. Journal of consulting and clinical psychology 78:169.

Jensen CG, Vangkilde S, Frokjaer V, Hasselbalch SG (2011) Mindfulness training affects attention—or is it attentional effort?

Lutz A, Brefczynski-Lewis J, Johnstone T, Davidson RJ (2008a) Regulation of the neural circuitry of emotion by compassion meditation: effects of meditative expertise. PLoS One 3:e1897.

Lutz A, Slagter HA, Dunne JD, Davidson RJ (2008b) Attention regulation and monitoring in meditation. Trends Cogn Sci 12:163-169.

Sedlmeier P, Eberth J, Schwarz M, Zimmermann D, Haarig F, Jaeger S, Kunze S (2012) The psychological effects of meditation: A meta-analysis.

Could a papester button irreversibly break down the research paywall?

A friend just sent me the link to the Aaron Swartz Memorial JSTOR liberator. We started talking about it and it led to a pretty interesting idea.

As soon as I saw this it clicked: we need papester. We need a simple browser plugin that can recognize, download, and re-upload any research document automatically (think Zotero) to BitTorrent (this was Aaron’s original idea, just crowdsourced). These would then be automatically turned into torrents with an associated magnet link. The plugin would interact with a lightweight torrent client, using a set fraction of your bandwidth (say 5%) to constantly seed back any files in your (Zotero) library folder. It would also automatically use part of that bandwidth to seed missing papers – first working through a queue of DOIs that others have searched for, and then through any missing paper in reverse chronological order – so that over time all papers would be on BitTorrent. The links would be archived by Google; any search engine could then find them, and the plugin would show the PDF download link.
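For illustration only, here is a rough Python sketch of that plumbing. Every name is hypothetical, and as a simplification the info-hash is computed from the raw PDF bytes – a real BitTorrent client derives it from the bencoded torrent metadata:

```python
import hashlib
from collections import deque
from urllib.parse import quote

# Hypothetical request queue: DOIs searched for by others but not yet seeded.
request_queue: deque[str] = deque()

def magnet_link(pdf_bytes: bytes, title: str) -> str:
    """Build a magnet URI for a paper. Simplification: real clients derive
    the btih hash from bencoded torrent metadata, not the PDF itself."""
    info_hash = hashlib.sha1(pdf_bytes).hexdigest()
    return f"magnet:?xt=urn:btih:{info_hash}&dn={quote(title)}"

def on_search_miss(doi: str) -> None:
    """A user searched for a DOI that isn't seeded yet: queue it."""
    if doi not in request_queue:
        request_queue.append(doi)

def on_paper_in_library(doi: str, pdf_bytes: bytes, title: str) -> None:
    """The plugin found a paper in the user's library folder: seed it,
    and clear it from the request queue if anyone was waiting."""
    link = magnet_link(pdf_bytes, title)
    if doi in request_queue:
        request_queue.remove(doi)
    print(f"now seeding {doi}: {link}")
```

The key design point is that the queue and the seeding loop are decoupled: the swarm fills requests opportunistically, so no central server ever has to host a paper.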

Once this system is in place, a pirate-bay/reddit mash-up could help sort the magnet links as a meta-data-rich papester torrent tracker. Users could post comments and reviews, which would themselves be subject to karma. Over time, a sorting algorithm could give greater weight to reviews from authors who consistently review unretracted papers, creating a kind of front page where “hot” would give you the latest research and “lasting” would give you timeless classics. Separating the sorting mechanism – which can essentially be any tracker – from the rating/meta-data system ensures that neither can be easily brought down. If users wished, they could compile independent trackers for particular topics or highly rated papers, form review committees, and request new experiments to address flagged issues in existing articles. In this way we would not only ensure an everlasting, loss-protected research database, but irreversibly push academic publishing toward an open-access and democratic review system. Students and people without access to scientific knowledge could easily find forgotten classics and the latest buzz with a simple sort. We need a “research-reddit” rating layer – why not solve open access and peer review in one go?
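As a sketch of what that weighting might look like – the formula and constants are my own invention, assuming retraction outcomes are known for every paper a reviewer has rated:

```python
def reviewer_weight(n_reviewed: int, n_retracted: int) -> float:
    """Reviewers build weight by consistently reviewing papers that are
    never retracted; retractions dilute the bonus, newcomers start at 1."""
    if n_reviewed == 0:
        return 1.0
    survival = 1 - n_retracted / n_reviewed
    return 1.0 + survival * n_reviewed ** 0.5  # bonus grows with track record

def front_page_score(karma: int, n_reviewed: int, n_retracted: int) -> float:
    """Raw review karma scaled by the reviewer's earned weight."""
    return karma * reviewer_weight(n_reviewed, n_retracted)

# Equal karma, very different track records:
print(front_page_score(10, 25, 0))  # 60.0 – veteran with 25 clean reviews
print(front_page_score(10, 0, 0))   # 10.0 – unknown newcomer
```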

Is this feasible? There are about 50 million papers in existence [1]. If we estimate an average of about 500 kilobytes per paper, that’s 25 million MB of data, or 25 terabytes. While that may sound like a lot, remember that most torrent trackers already list much more data than this, and that available bandwidth increases annually. If we can archive a ROM of every videogame ever created, why not papers? The entire collection of magnet links could take up as little as 1 GB of data, making it easy to periodically back up the archive, ensure the system is resilient to take-downs, and re-seed lesser-known or less sought-after papers. Just imagine it – all of our knowledge stored safely in a completely open collection, backed by the power of the swarm, organized by reviews, comments, and ratings, accessible to all. It would revolutionize the way we learn and share knowledge.
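The back-of-envelope numbers are easy to verify (using decimal units, where 1 TB = 10^12 bytes):

```python
papers = 50_000_000       # estimated scholarly articles in existence [1]
avg_kb_per_paper = 500    # assumed average size of a paper PDF

total_kb = papers * avg_kb_per_paper
print(total_kb // 1_000)            # 25_000_000 MB
print(total_kb // 1_000_000_000)    # 25 TB
```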

Of course there would be ruthless resistance to this sort of thing from publishers. It would be important to take steps to protect yourself, perhaps through Tor. The small size of the files would facilitate better encryption. When universities inevitably move to block uploads, tags could be used to quickly upload acquired files later from a public wifi hotspot. There are other benefits as well – currently there are untold numbers of classic papers available online in reference only. What incentive is there for libraries to continue scanning these? A papester-backed uploader karma system could help bring thousands of these documents irreversibly into the fold. Even if publishers found some way to stifle the system, as with Napster the damage would be done. Just as we were pushed irrevocably toward new forms of music consumption – direct download, streaming, donate-to-listen – big publishers would be forced toward an open-access model to recover costs. Finally, such a system might move us closer to a self-publishing arXiv model. If you couldn’t afford open-access fees, you could self-publish your own PDF to the system. User reviews and ratings could serve as a first layer of feedback for improving the article. The idea or data – with your name behind it – would be out there fast and free.

edit:

Another cool feature would be a DOI search. When a user searches for a paper that isn’t available, papester would automatically add that paper to a request queue.

edit2/disclaimer:

This is a thought experiment about an illegal solution and its possible consequences and benefits. Do with it what you will, but recognize the gap between the theoretical and the actual!

1. Arif Jinha (2010). Article 50 million: an estimate of the number of scholarly articles in existence. Learned Publishing, 23(3), 258–263. DOI: 10.1087/20100308. Free pre-print available from the author here.