The return of neuroconscience

Hello everyone! After an amazing visit back home to Tampa, Florida for VSS and a little R&R in Denmark, I'm back and feeling better than ever. Some of you may have noticed that I've been on an almost six-month blogging hiatus. I'm just going to come right out and admit that after moving from Denmark to London, I really wasn't sure what direction I wanted to take my blog. Changing institutions is always a bit of a bewildering experience, and a wise friend once advised me that it's sometimes best to quietly observe new surroundings before diving right in. I think I needed some time to get used to being a part of the awesomeness that is the Queen Square neuroimaging hub. I also needed some time to reflect on the big picture of my research, this blog, and my overall social media presence.

But fear not! After the horrors of settling into London, I'm finally comfortable in my skin again, with a new flat, a home office almost ready, and lots and lots of new ideas to share with you. I think part of my overall hesitancy came from pondering just what I should be sharing. But I didn't get this far by bottling up my research, so there isn't much point in shuttering myself in now! I expect to be back to blogging in full form in the next week, as new projects here get underway. But where is my research going?

The big picture will largely remain the same. I am interested as always in human consciousness, thought, self-awareness, and our capacity for growth along these dimensions. One thing I really love about my post-doc is that I've finally found a kind of thread weaving throughout my research, all the way back to the days when I collected funny self-narratives in a broom closet at UCF. You could say I'm trying to connect the dots between how dynamic bodies shape and interact with our reflective minds, using the tools of perceptual decision making, predictive coding, and neuroimaging. Currently I'm developing a variety of novel experimental paradigms examining embodied self-awareness (i.e., our somatosensory, interoceptive, and affective sense of self), perceptual decision making and metacognition, and the interrelations between these. You can expect to hear more about these topics soon.

Indeed, a principal reason I chose to join the FIL/ICN team was the unique emphasis and expertise here on predictive coding. My research has always been united by an interest in growth, plasticity, and change. During my PhD I came to see predictive coding/free energy schemes as a framework under which to unite our understanding of embodied and neural computation in terms of our ability to learn from new experiences. As such, I'm very happy to be in a place where I can not only be on the cutting edge of theoretical development, but also receive first-hand training in applying the latest computational modelling, connectivity, and multi-modal imaging techniques to my research questions. As always, given my obvious level of topical ADHD, you can be sure to expect coverage of a wide range of cogneuro and cogsci topics.

So in general, you can expect posts covering these topics, my upcoming results, and general musings along these lines. As always, I'm sure there will be plenty of methodsy nitpicking and philosophical navel gazing. In particular, my recent experience with a reviewer insisting that 'embodiment' = interoception has me itching to fire off a theoretical barrage – but I guess I should wait to publish that paper before taking to the streets. In the near future I have planned a series of short posts covering some of the cool posters and general themes I observed at the Vision Sciences Society conference this fall.

Finally, for my colleagues working on mindfulness and meditation research, a brief note. As you can probably gather, I don't intend to return to this domain of study in the near future. My personal opinion is that the topic has become incredibly overhyped and incestuous – the best research simply isn't rising to the top. I know that many of the leaders in that community are well aware of the problem and are working to correct it, but for me it was time to part ways and return to more general research. I do believe that mindfulness has an important role to play in both self-awareness and well-being, and I hope that the models I am currently developing might one day further refine our understanding of these practices. However, it's worth noting that for me, meditation was always a kind of Varelian way to manipulate plasticity and consciousness rather than an end in itself; as I no longer buy into the enactive/neurophenomenological paradigm, I guess it's self-explanatory that I would move on to other things (like actual consciousness studies! :P). I do hope to see that field continue to grow and mature, and I look forward to fruitful collaborations along those lines.


That’s it folks! Prepare yourself for a new era of neuroconscience 🙂 Cheers to an all new year, all new research, and new directions! Viva la awareness!


Birth of a New School: PDF version and Scribus Template!

As promised, today we are releasing a copy-edited PDF of my "Birth of a New School" essay, as well as a Scribus template that anyone can use to quickly create their own professional-quality PDF manuscripts. Apologies for the lengthy delay, as I've been in the middle of a move to the UK. We hope folks will iterate and optimize these templates for a variety of purposes, especially post-publication peer review, commentary, pre-registration, and more. Special thanks to collaborator Kate Mills, who used Scribus to create the initial layout. You might notice we deliberately styled the manuscript around the format of one of those Big Sexy Journals (see if you can guess which one). I've heard this elaborate process should cost somewhere in the tens of thousands of dollars per article, so I guess I owe Kate a few lunches! Seriously though, the entire copy-editing and formatting process only took about 3 or 4 hours total (most of which was just getting used to the Scribus interface) – less than the time you would spend formatting and reformatting your article for a traditional publisher. With a little practice, Scribus or similar tools can be used to quickly turn out a variety of high-quality article types.

Here is the article on Figshare, and the direct download link:

[Figure: the formatted manuscript. Easy!]

What do you think? Personally, I'm really pleased with it! We've also gone ahead and uploaded the Scribus template to Figshare. You can use this to easily publish your own post-publication peer reviews, commentaries, and whatever else you like. Just copy-paste your own text into the text fields, replace the images, upload to Figshare or a similar service, and you are good to go! In general Scribus is a really awesome open-source tool for publishing, both easy to learn and cross-platform. Another great alternative is Fidus. For now we're still not exactly sure how to generate citations – in theory, if you format your manuscripts according to these guidelines, Google Scholar will pick them up anywhere on the net and generate alerts. For now we recommend everyone upload their self-publications to Figshare or a similar service, which is already working on a streamlined citation-generation scheme. We hope you find these useful; now go out and publish some research!

The template:

[Figure: our easy-to-use Scribus template, for quick creation of research proofs.]

Birth of a New School: How Self-Publication can Improve Research

Edit: click here for a PDF version and citable figshare link!

Preface: What follows is my attempt to imagine a radically different future for research publishing. Apologies for any overlooked references – the following is meant to be speculative and purposely walks the line between paper and blog post. Here is to a productive discussion regarding the future of research.

Our current systems of producing, disseminating, and evaluating research could be substantially improved. For-profit publishers enjoy extremely high, taxpayer-funded profit margins. Traditional closed-door peer review is creaking under the weight of an exponentially growing knowledge base, delaying important communications and often resulting in seemingly arbitrary publication decisions [1–4]. Today's young researchers are frequently dismayed to find their painstaking efforts at producing quality reviews overlooked or discouraged by journalistic editorial practices. In response, the research community has risen to the challenge of reform, giving birth to an ever-expanding multitude of publishing tools: statistical methods to detect p-hacking [5], numerous open-source publication models [6–8], and innovative platforms for data and knowledge sharing [9,10].

While I applaud the arrival and intent of these tools, I suspect that ultimately publication reform must begin with publication culture – with the very way we think of what a publication is and can be. After all, how can we effectively create infrastructure for practices that do not yet exist? Last summer, shortly after igniting #pdftribute, I began to think more and more about the problems confronting the publication of results. After months of conversations with colleagues I am now convinced that real reform will come not in the shape of new tools or infrastructures, but rather in the culture surrounding academic publishing itself. In many ways our current publishing infrastructure is the product of a paper-based society keen to produce lasting artifacts of scholarly research. In parallel, the exponential growth of networked society has led to an open-source software community in which knowledge is not a static artifact but rather an ever-expanding living document of intelligent productivity. We must move towards "research 2.0" and beyond [11].

From Wikipedia to GitHub, open-source communities are changing the way knowledge is produced and disseminated. Already this movement has begun to reach academia, with researchers across disciplines flocking to social media, blogs, and novel communication infrastructures to create a new movement of post-publication peer review [4,12,13]. In math and physics, researchers have already embraced self-publication, uploading preprints to the online repository arXiv, and more and more disciplines are using the site to archive their research. I believe that the inevitable future of research communication lies in this open-source metaphor, in the form of pervasive self-publication of scholarly knowledge. The question is thus not where we are going, but how we should prepare for this radical change in publication culture. In asking these questions I would like to imagine what research will look like 10, 15, or even 20 years from today. This post is intended as a first step towards bringing to light specific ideas for how this transition might be facilitated. Rather than a prescriptive essay, this is merely an attempt to imagine what that future may look like. I invite you to treat what follows as an 'open beta' for these ideas.

Part 1: Why self-publication?

I believe the essential metaphor is found within the open-source software community. To this end, over the past few months I have feverishly discussed the merits and risks of self-publishing scholarly knowledge with my colleagues and peers. While at first I worried many would find the notion of self-publication utterly absurd, I have been astonished at the responses – many have been excitedly optimistic! I was surprised to find that some of my most critical and stoic colleagues have lost so much faith in traditional publication and peer review that they are ready to consider more radical options.

The basic motivation for research self-publication is pretty simple: research papers cannot be properly evaluated without first being read. Now, by evaluation, I don't mean for the purposes of hiring or grant-giving committees. These are essentially financial decisions, e.g. "how do I effectively spend my money without reading the papers of the 200+ applicants for this position?" Such decisions will always rely on heuristics and metrics that must necessarily sacrifice accuracy for efficiency. However, I believe that self-publication culture will provide a finer grain of metrics than ever dreamed of under our current system. By documenting each step of the research process, self-publication and open science can yield rich information that can be mined for increasingly useful impact measures – but more on that later.

When it comes to evaluating research, many admit that there is no substitute for opening up an article and reading its content – regardless of journal. My prediction is that, as post-publication peer review gains acceptance, some tenured researcher or brave young scholar will eventually decide to simply self-publish her research directly onto the internet, and when that research goes viral, a deluge of self-publications will follow. Of course, busy lives require heuristic decisions, and it's arguable that publishers provide this editorial service. While I will address this issue specifically in Part 3, for now I want to point out that growing empirical evidence suggests our current publisher/impact-based system provides an unreliable heuristic at best [14–16]. Thus, my essential reason for supporting self-publication is that in the worst-case scenario, self-publications come with the disclaimer: "read the contents and decide for yourself." As self-publishing practices become established, it is easy to imagine that these difficulties will be largely mitigated by self-published peer reviews and novel infrastructures supporting these interactions.

Indeed, with a little imagination we can picture plenty of potential benefits of self-publication to offset the risk that we might read poor papers. Researchers spend exorbitant amounts of their time reviewing, commenting on, and discussing articles – most of that rich content and meta-data is lost under the current system. By documenting the research practice more thoroughly, the ensuing flood of self-published data can support new quantitative metrics of reviewer trust, and be further utilized to develop rich information about new ideas and data in near real-time. To give just one example, we might calculate how many subsequent citations or retractions a particular reviewer generates, yielding a reviewer impact factor and reliability index. The more aspects of research we publish, the greater the data-mining potential. Incentivizing in-depth reviews that add clarity and conceptual content to research, rather than merely knocking down or propping up equally imperfect artifacts, will ultimately improve research quality. By self-publishing well-documented, open-sourced pilot data and accompanying digital reagents (e.g. scripts, stimulus materials, protocols, etc.), researchers can get instant feedback from peers, preventing uncounted research dollars from being wasted. Previously closed-door conferences can become live records of new ideas and conceptual developments as they unfold. The metaphor here is research as open source – an ever-evolving, living record of knowledge as it is created.
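
To make that concrete, here is a toy sketch of what such a reviewer metric might look like. Everything in it – the record format, the two scores, every name – is hypothetical, meant only to illustrate the kind of mining a self-publication culture would enable:

```python
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    reviewer: str
    later_citations: int  # citations the reviewed work accrued after the review
    retracted: bool       # was the reviewed work eventually retracted?

def reviewer_scores(records):
    """Toy 'reviewer impact factor' (mean citations following a review) and
    'reliability index' (share of reviewed papers never retracted)."""
    grouped = {}
    for rec in records:
        grouped.setdefault(rec.reviewer, []).append(rec)
    return {
        name: (
            sum(r.later_citations for r in recs) / len(recs),  # impact
            sum(not r.retracted for r in recs) / len(recs),    # reliability
        )
        for name, recs in grouped.items()
    }

# Toy usage with made-up records
records = [
    ReviewRecord("reviewer_a", later_citations=12, retracted=False),
    ReviewRecord("reviewer_a", later_citations=3, retracted=True),
    ReviewRecord("reviewer_b", later_citations=7, retracted=False),
]
print(reviewer_scores(records))  # {'reviewer_a': (7.5, 0.5), 'reviewer_b': (7.0, 1.0)}
```

Real metrics would of course need to be far more careful (field normalization, resistance to gaming), but the point stands: once reviews are citable objects, this kind of computation becomes trivial.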

Now, let's contrast this model with the current publishing system. Every publisher (including open-access) obliges researchers to adhere to arbitrarily varied formatting constraints, presentation rules, submission and acceptance fees, and review cultures. Researchers perform reviews for free – often on publicly subsidized work – so that publishers can then turn around and sell the finished product back to those same researchers (and the public) at an exorbitant mark-up. These constraints introduce lengthy delays – ranging from 6+ months in the sciences all the way up to two years in some humanities disciplines. By contrast, how you self-publish your research is entirely up to you – where, when, how, the formatting, and the openness. Put simply: if you could publish your research how and when you wanted, and have it generate the same "impact" as traditional venues, why would you use a publisher at all?

One obvious reason to use publishers is copy-editing, i.e. the creation of pretty manuscripts. Another is the guarantee of high-profile distribution. Under the current system these are legitimate worries. While it is possible to produce reasonably formatted papers on your own, ideally an open-source, easy-to-use copy-editing tool is needed to facilitate mainstream self-publication. Innovators like figshare are already leading the way in this area. In the next section, I will consider some ways in which self-publication can overcome these and other potential limitations, in terms of specific applications and guidelines for maximizing the utility of self-published research. To do so, I will outline a few specific cases with the most potential for self-publication to make a positive impact on research right away, and hopefully illuminate the 'why' question a bit further with some concrete examples.

Part 2: Where to begin self-publishing

What follows is the "how-to" part of this document. I must preface it by saying that although I have written so far with researchers across the sciences and humanities in mind, I will now focus primarily on the scientific examples with which I am more experienced. The transition to self-publication is already happening in the forms of academic tweets, self-archives, and blogs, at a seemingly exponential growth rate. To be clear, I do not believe that the new publication culture will be utopian. As in many human endeavors, the usual brandism [3], politics, and corruption can be expected to appear in this new culture. Accordingly, the transition is likely to be a bit wild and woolly around the edges. Like any generational culture shift, new practices must first emerge before infrastructures can be put in place to support them. My hope is to contribute to that cultural shift from artifact-based to process-based research by outlining particularly promising early venues for self-publication. Once these practices become more common, there will be huge opportunities for those ready and willing to step in and provide rich informational architectures to support and enhance self-publication – but for now we can only step into that wild frontier.

In my discussions with others I have identified three particularly promising areas where self-publication is either already contributing or can begin contributing to research. These are: the publication of exploratory pilot data, post-publication peer reviews, and trial pre-registration. I will cover each in turn, attempting to provide examples and templates where possible. Finally, Part 3 will examine some common concerns with self-publication. In general, I think that successful reforms should resemble existing research practices as much as possible: publication solutions are most effective when they resemble daily practices that are already in place, rather than forcing individuals into novel practices or infrastructures with an unclear time commitment. A frequent criticism of current solutions such as the comment sections on Frontiers and PLOS ONE, or the newly developed PubPeer, is that they are rarely used by the general academic population. It is reasonable to conclude that this is because already overworked academics see little plausible benefit from contributing to these discussions given the current publishing culture (worse still, they may fear other negative repercussions, discussed in Part 3). Thus a central theme of the following examples is that they attempt to mirror practices in which many academics are already engaged, with complementary incentive structures (e.g. citations).

Example 1: Exploratory Pilot Data 

This past summer witnessed a fascinating clash of research cultures, with the eruption of intense debate between pre-registration advocates and pre-registration skeptics. I derived some useful insights from both sides of that discussion. Many were concerned about what would happen to exploratory data under these new publication regimes. Indeed, a general worry with existing reform movements is that they appear to emphasize a highly conservative and somewhat cynical "perfect papers" culture. I do not believe in perfect papers – the scientific model is driven by replication and discovery. No paper can ever be 100% flawless – otherwise there would be no reason for further research! Inevitably, some will find ways to cheat the system. Accordingly, reform must incentivize better reporting practices over stricter control, or at least strike a balance between the two extremes.

Exploratory pilot data are an excellent avenue for this. By their very nature such data are not confirmatory – they are exciting precisely because they do not conform well to prior predictions. Such data benefit from rapid communication and feedback. Imagine an intuition-based project – a side or pet project conducted on the fly, for example. The researcher might feel that the project has potential, but also knows that there could be serious flaws. Most journals won't publish these kinds of data. Under the current system these data are lost, hidden, obscured, or otherwise forgotten.

Compare this to a self-publication world: the researcher can upload the data, document all the protocols, make the presentation and analysis scripts open source, and provide some well-written documentation explaining why she thinks the data are of interest. Some intrepid graduate student might find it and follow up with a valuable control analysis, pointing out an excellent feature or fatal flaw, which he can then upload as a direct citation to the original data. Both publications are citable, giving credit to originator and reviewer alike. Armed with this new knowledge, the original researcher could now pre-register an altered protocol and conduct a full study on the subject (or alternatively, abandon the project entirely). In this exchange, hundreds of hours and research dollars are likely to have been saved. Additionally, the entire process will have been documented, making it both citable and minable for impact metrics. Tools already exist for each of these steps – largely it is cultural fear that prevents it from happening. How would it be perceived? Would anyone read it? Will someone steal my idea? To better frame these issues, I will now examine a self-publication practice that has already emerged in force.

Example 2: Post-publication peer review

This is a particularly easy case, precisely because high-profile scholars are already regularly engaged in the practice. As I've frequently joked on Twitter, we're rapidly entering an era in which publishing in a glam-mag carries no impact guarantee if the paper itself isn't worthwhile – you may as well hang a target on your head for post-publication peer reviewers. However, I want to emphasize the positive benefits and not just the conservative controls. Post-publication peer review (PPPR) has already begun to change the way we view research, with reviewers adding lasting content to papers, enriching the conclusions one can draw, and pointing out novel connections that were not explored by the authors themselves. Here I like to draw an analogy to the open-source movement, where code (and its documentation) is forkable, versioned, and open to constant revision – never static but always evolving.

Indeed, just last week PubMed launched its new "PubMed Commons" system, an innovative PPPR comment system whereby any registered person (with at least one paper on PubMed) can leave scientific comments on articles. Inevitably, the reception on Twitter and Facebook mirrored previous attempts to introduce infrastructure-based solutions – mixed excitement followed by a lot of bemused cynicism – "bring out the trolls," many joked. Indeed, a brief scan of the average comment on another platform, PubPeer, revealed a generally (but not entirely) poor level of comment quality. While many comments seemed to be on topic, most had little to no formatting and were given with little context. At times comments can seem trollish, pointing out minor flaws as if they render the paper worthless. In many disciplines like my own, few comments could be found at all. This compounds the central problem with PPPR: why would anyone engage with such a system if the primary result is poorly formed nitpicking of your research? The essential problem here is again incentive – for reviews to be of high quality, there must be incentives. We need a culture of PPPR that values positive and negative comments equally. This is common to both traditional and self-publication practices.

To facilitate easy, incentivized self-publication of comments and PPPRs, my colleague Hauke Hillebrandt and I have attempted to create a simple template that researchers can use to quickly and easily publish these materials. The idea is that by using these templates and uploading them to figshare or similar services, Google Scholar will automatically index them as citations, provide citation alerts to the original authors, and even include the comments in its h-index calculation. This way researchers can begin to get credit for what they are already doing, in an easy-to-use and familiar format. While the template isn't quite working yet (oddly enough, Scholar is counting citations from my blog, but not the template), you can take a look at it here and maybe help us figure out why it isn't working! In the near future we plan to get this working, and we will follow up this post with the full template, ready for you to use.

Example 3: Pre-registration of experimental trials

As my final example, I suggest that for many researchers, self-publication of trial pre-registrations (PRs) may be an excellent way to test the waters of pre-registration in a format with a low barrier to entry. Replication attempts are a particularly promising venue for PRs, and self-publication of such registrations is a way to move quickly from idea to registration to collection (as in the pilot-data example above), while ensuring that credit for the original idea is embedded in the infamously hard-to-erase memory of the internet.

One benefit of self-publishing PRs, rather than relying on for-profit publishers, is that the templates themselves can easily be open-sourced, allowing research fields to generate community-based, specialized templates adhering to the needs of each discipline. Self-published PRs, as well as high-quality templates, can be cited – incentivizing the creation and dissemination of both. I imagine specialized templates will rapidly emerge within each community, tailored to the needs of that research discipline.

Part 3: Criticism and limitations

Here I will close by considering some common concerns with self-publication:

Quality of data

A natural worry at this point is quality control. How can we be sure that what is published without the seal of peer review isn't complete hooey? The primary response is that we cannot – just as we cannot be sure that peer-reviewed materials are of quality without first reading them ourselves. Still, it is for this reason that I have tried to suggest a few particularly ripe venues for the self-publication of research. The cultural zeitgeist supporting full-blown scholarly self-publication has not yet arrived, but we can already begin to prepare for it. With regard to filtering noise, I argue that by coupling post-publication peer review and social media, quality self-publications will rise to the top. Importantly, this issue points towards flaws in our current publication culture. In many research areas there are effects that are repeatedly published but that few believe, largely due to biases against null findings. Self-publication aims to make as much of the research process publicly available as possible, preventing this kind of knowledge from slipping through the editorial cracks and improving our ability to evaluate the veracity of published effects. If such data are reported cleanly and completely, existing quantitative tools can further incorporate them to better estimate the likelihood of p-hacking within a literature. That leads to the next concern – quality of presentation.

[Image: Hemingway's thoughts on data.]

Quality of presentation

Many ask: how, in this brave new world, will we separate signal from noise? I am sure that every published researcher already receives at least a few garbage citations a year from obscure papers in obscure journals with little relevance to the actual article contents. But, so the worry goes, what if we are deluged with a vast array of poorly written, poorly documented, self-published crud? How would we separate the signal from the noise?

The answer is content, presentation, and clarity. These must be treated as central guidelines for self-publication to be worth anyone's time. The internet memesphere has already generated one rule for ranking interest: content rules. Content floats and is upvoted; blogspam sinks and is downvoted. This is already true for published articles – Twitter, Reddit, Facebook, and email circles help us separate the wheat from the chaff at least as much as impact factor, if not more. But presentation and clarity are equally important. Poorly conducted research is not shared, or if it is shared, it is shared with vehemence. Similarly, poorly written self-publications, or poorly documented data and reagents, are unlikely to generate positive feedback, much less impact-generating eyeballs. I like to imagine a distant future in which self-publication has given rise to a new generation of well-regarded specialists: reviewers who are prized for their content, presentation, and clarity; coders who produce cleanly documented pipelines; behaviorists producing powerful and easily customized paradigm scripts; and data-collection experts who produce the smoothest, cleanest data around. All of these future specialists will be able to garner impact for the things they already do, incentivizing each step of the research process rather than only the end product.

Being scooped, intellectual credit

Another common concern is "what if my idea/data/pilot is scooped?" I acknowledge that, particularly in these early days, the decision to self-publish must be weighed against this possibility. However, I must also point out that under the current system authors must likewise weigh the decision to develop an idea in isolation against the benefits of communicating with peers and colleagues. Both carry risks and benefits – a researcher working in isolation can easily over-estimate the quality or impact of an idea or project. The decision to self-publish must similarly be weighed against the need for feedback. Furthermore, a self-publication culture would allow researchers to move more quickly from project to publication, ensuring that they are readily credited for their work. And again, as research culture continues to evolve, I believe this concern will increasingly fade. It is notoriously difficult to erase information from the internet (see the "Streisand effect") – there is no reason why self-published ideas and data cannot generate direct credit for their authors. Indeed, I envision a world in which these contributions can themselves be independently weighted and credited.

Prevention of cheating, corruption, self-citations

To some, this will be an inevitable point of departure. Without our time-tested guardian of peer review, what is to prevent a flood of outright fabricated data? My response is: what prevents outright fabrication under the current system? To misquote Jeff Goldblum in Jurassic Park, cheaters will always find a way. No matter how much we tighten our grip, there will be those who respond to the pressures of publication with deliberate misconduct. I believe that the current publication system directly incentivizes such behavior by valuing end product over process. By creating incentives for low-barrier post-publication peer review, pre-registration, and rich pilot-data publication, researchers are given the opportunity to generate impact at each step of the research process. When faced with a choice between the vast penalties of cheating over a null finding and doing one's best to turn those data into something useful for someone, I suspect most people will choose the honest, less risky option.

Corruption and self-citations are perhaps a subtler, more sinister factor. In my discussions with colleagues, a frequent concern is that there is nothing to prevent high-impact "rich club" institutions from banding together to provide glossy post-publication reviews, citation farming, or promoting one another's research to the top of the pile regardless of content. I again answer: how is this any different from our current system? Papers are submitted to an editor who makes a subjective evaluation of the paper's quality and impact, before sending it to four out of a thousand possible reviewers who will make an obscure decision about the content of the paper. Sometimes this system works well, but increasingly it does not [2]. Many have witnessed great papers rejected for political reasons, and poor ones accepted for the same. Lowering the barrier to post-publication peer review means that even when these factors drive a paper to the top, it will be far easier to contextualize that research with a heavy dose of reality. Over time, I believe self-publication will incentivize good research. Cheating will always be a factor – and this new frontier is unlikely to be a utopia. Rather, I hope to contribute to the development of a bridge between our traditional publishing models and a radically advanced, not-too-distant future.

Conclusion

Our current systems of producing, disseminating, and evaluating research increasingly seem to be out of step with cultural and technological realities. To take back the research process and bolster the ailing standard of peer review, I believe research will ultimately adopt an open and largely publisher-free model. In my view, these new practices will be entirely complementary to existing solutions such as the p-curve [5], open-source publication models [6–8], and innovative platforms for data and knowledge sharing such as PubPeer, PubMed Commons, and figshare [9,10]. The next step from here will be to produce usable templates for self-publication. You can expect to see a PDF version of this post in the coming weeks as a further example of self-publishing practices. In attempting to build a bridge to the coming technological and social revolution, I hope to inspire others to join the conversation so that we can improve all aspects of research.

Acknowledgments

Thanks to Hauke Hillebrandt, Kate Mills, and Francesca Fardo for invaluable discussion, comments, and edits of this work. Many of the ideas developed here were originally inspired by this post envisioning a self-publication future. Thanks also to PubPeer, PeerJ, figshare, and others in this area for their pioneering work in providing valuable tools and spaces to begin engaging with self-publication practices.

Addendum

Excellent resources already exist for many of the ideas presented here. I want to give special notice to researchers who have already begun self-publishing their work, whether as preprints, archives, or direct blog posts. Parallel publishing is an attractive transitional option, where researchers pre-publish their work for immediate feedback before submitting it to a traditional publisher. Special notice should be given to Zen Faulkes, whose excellent pioneering blog posts demonstrated that it is reasonably easy to self-produce well-formatted publications. Here are a few pioneering self-published papers you can use as examples – feel free to add your own in the comments:

The distal leg motor neurons of slipper lobsters, Ibacus spp. (Decapoda, Scyllaridae), Zen Faulkes

http://neurodojo.blogspot.dk/2012/09/Ibacus.html

Eklund, Anders (2013): Multivariate fMRI Analysis using Canonical Correlation Analysis instead of Classifiers, Comment on Todd et al. figshare.

http://dx.doi.org/10.6084/m9.figshare.787696

Automated removal of independent components to reduce trial-by-trial variation in event-related potentials, Dorothy Bishop

http://bishoptechbits.blogspot.dk/2011_05_01_archive.html

Deep Impact: Unintended consequences of journal rank

Björn Brembs, Marcus Munafò

http://arxiv.org/abs/1301.3748

A novel platform for open peer to peer review and publication:

http://thewinnower.com/

A platform for open PPPRs:

https://pubpeer.com/

Another PPPR platform:

http://f1000.com/

References

1. Henderson, M. Problems with peer review. BMJ 340, c1409 (2010).

2. Ioannidis, J. P. A. Why Most Published Research Findings Are False. PLoS Med 2, e124 (2005).

3. Peters, D. P. & Ceci, S. J. Peer-review practices of psychological journals: The fate of published articles, submitted again. Behav. Brain Sci. 5, 187 (2010).

4. Hunter, J. Post-publication peer review: opening up scientific conversation. Front. Comput. Neurosci. 6, 63 (2012).

5. Simonsohn, U., Nelson, L. D. & Simmons, J. P. P-Curve: A Key to the File Drawer. (2013). at <http://papers.ssrn.com/abstract=2256237>

6. MacCallum, C. J. ONE for All: The Next Step for PLoS. PLoS Biol. 4, e401 (2006).

7. Smith, K. A. The frontiers publishing paradigm. Front. Immunol. 3, 1 (2012).

8. Wets, K., Weedon, D. & Velterop, J. Post-publication filtering and evaluation: Faculty of 1000. Learn. Publ. 16, 249–258 (2003).

9. Allen, M. PubPeer – A universal comment and review layer for scholarly papers? | Neuroconscience on WordPress.com. Website/Blog (2013). at <http://neuroconscience.com/2013/01/25/pubpeer-a-universal-comment-and-review-layer-for-scholarly-papers/>

10. Hahnel, M. Exclusive: figshare a new open data project that wants to change the future of scholarly publishing. Impact Soc. Sci. blog (2012). at <http://eprints.lse.ac.uk/51893/1/blogs.lse.ac.uk-Exclusive_figshare_a_new_open_data_project_that_wants_to_change_the_future_of_scholarly_publishing.pdf>

11. Yarkoni, T., Poldrack, R. A., Van Essen, D. C. & Wager, T. D. Cognitive neuroscience 2.0: building a cumulative science of human brain function. Trends Cogn. Sci. 14, 489–496 (2010).

12. Bishop, D. BishopBlog: A gentle introduction to Twitter for the apprehensive academic. Blog/website (2013). at <http://deevybee.blogspot.dk/2011/06/gentle-introduction-to-twitter-for.html>

13. Hadibeenareviewer. Had I Been A Reviewer on WordPress.com. Blog/website (2013). at <http://hadibeenareviewer.wordpress.com/>

14. Tressoldi, P. E., Giofré, D., Sella, F. & Cumming, G. High Impact = High Statistical Standards? Not Necessarily So. PLoS One 8, e56180 (2013).

15. Brembs, B. & Munafò, M. Deep Impact: Unintended consequences of journal rank. (2013). at <http://arxiv.org/abs/1301.3748>

16. Eisen, J. A., MacCallum, C. J. & Neylon, C. Expert Failure: Re-evaluating Research Assessment. PLoS Biol. 11, e1001677 (2013).


Correcting your naughty insula: modelling respiration, pulse, and motion artifacts in fMRI

important update: Thanks to commenter "DS", I discovered that my respiration-related data were strongly contaminated due to mechanical error. The belt we used is very susceptible to becoming uncalibrated, for example if the subject moves or breathes very deeply. When looking at the raw timecourse of respiration I could see that many subjects, including the one displayed here, show a great deal of "clipping" in the timeseries. For the final analysis I will not use the respiration regressors, but rather just the pulse and motion. Thanks DS!
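
(As an aside, for anyone wanting to screen their own belt recordings: here is a minimal sketch of one way to flag a clipped trace. The function, the tolerance, and the toy signal are all my own assumptions, not part of our actual pipeline.)

```python
import numpy as np

def clipping_fraction(resp, tol=1e-3):
    """Estimate how badly a respiration trace is clipped: the fraction of
    samples pinned within `tol` (as a share of the total range) of the
    recorded minimum or maximum. High values suggest the belt saturated."""
    lo, hi = resp.min(), resp.max()
    span = hi - lo
    pinned = (resp <= lo + tol * span) | (resp >= hi - tol * span)
    return pinned.mean()

# Toy usage: a sine wave saturated at +/-0.5 to mimic a clipped belt trace
t = np.linspace(0, 60, 3000)                      # 60 s sampled at 50 Hz
resp = np.clip(np.sin(2 * np.pi * 0.3 * t), -0.5, 0.5)
print(clipping_fraction(resp))                    # large fraction -> suspicious
```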

As I'm working my way through my latest fMRI analysis, I thought it might be fun to share a little bit of that here. Right now I'm coding up a batch pipeline for data from my Varela-award project, in which we compared "adept" meditation practitioners with motivation-, IQ-, age-, and gender-matched controls on a response-inhibition and error-monitoring task. One thing that came up in the project proposal meeting was a worry that, since meditation practitioners spend so much time working with the breath, they might breathe differently either at rest or during the task. As I've written about before, respiration and other related physiological variables, such as cardiac-pulsation-induced motion, can seriously impact your fMRI results (when your heart beats, the veins in your brain pulsate, creating slight but consistent and troublesome MR artifacts). As you might expect, these artifacts tend to be worse around the main draining veins of the brain, several of which cluster around the frontoinsular and medial-prefrontal/anterior cingulate cortices. As these regions are important for response inhibition and are frequently reported in the meditation literature (without physiological controls), we wanted to try to control for these variables in our study.

disclaimer: I'm still learning about noise modelling, so apologies if I mess up the theory/explanation of the techniques used! I've left things a bit vague for that reason. See the bottom of the article for references and further reading. To encourage myself to post more of these "open-lab notes" posts, I've kept the style here very informal, so apologies for typos or snafus. 😀

To measure these signals, we used the respiration belt and pulse monitor that come standard with most modern MRI machines. The belt is just a little elastic hose that you strap around the chest wall of the subject, where it records expansions and contractions of the chest to give a time series corresponding to respiration; the pulse monitor is a standard finger clip. Although I am not an expert on physiological noise modelling, I will do my best to explain the basic effects you want to model out of your data. These "non-white" noise signals include pulsation- and respiration-induced motion (when you breathe, you tend to nod your head just slightly along the z-axis), typical motion artifacts, and variability of pulsation and respiration. To do this I fed my physiological parameters into an in-house function written by Torben Lund, which incorporates a RETROICOR transformation of the pulsation and respiration timeseries. We don't just use the raw timeseries, due to signal aliasing – the physio data need to be shifted so that each physiological event corresponds to a TR. The function also calculates respiration volume per time (RVT), a measure developed by Rasmus Birn, to model the variability in physiological parameters [1]. Variability in respiration and pulse volume (if one group of subjects tends to inhale sharply for some conditions but not others, for example) is more likely to drive BOLD artifacts than absolute respiratory volume or frequency. Finally, as is standard, I included the realignment parameters to model subject motion-related artifacts. Here is a shot of my monster design matrix for one subject:

[Figure: the monster design matrix for one subject.]
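
Before walking through the matrix, a quick aside on what the RETROICOR transformation actually produces. I obviously can't reproduce Torben's in-house function here, but to give a flavor of the core recipe from Glover et al. (reference 3 below), here is a minimal Python sketch. All variable names and toy numbers are my own; boundary handling (scans before the first or after the last detected peak) is omitted, as is the respiratory phase, which RETROICOR derives from a histogram of belt amplitudes rather than from peak timing:

```python
import numpy as np

def cardiac_phase_at_scans(peak_times, scan_times):
    """Cardiac phase at each scan time, following the RETROICOR idea
    (Glover et al., 2000): phase advances linearly from 0 to 2*pi
    between successive pulse peaks."""
    phases = np.zeros(len(scan_times))
    for i, t in enumerate(scan_times):
        prev = peak_times[peak_times <= t].max()   # last peak before the scan
        nxt = peak_times[peak_times > t].min()     # next peak after the scan
        phases[i] = 2 * np.pi * (t - prev) / (nxt - prev)
    return phases

def fourier_regressors(phase, order=3):
    """Low-order sine/cosine expansion of a phase time series; each
    harmonic contributes one sine and one cosine nuisance column."""
    cols = []
    for m in range(1, order + 1):
        cols.append(np.sin(m * phase))
        cols.append(np.cos(m * phase))
    return np.column_stack(cols)

# Toy usage: a perfectly regular 1 Hz pulse, TR = 2 s, 100 scans
peaks = np.arange(0.0, 210.0, 1.0)      # pulse peak times (s)
scans = np.arange(0.5, 200.0, 2.0)      # scan acquisition times (s)
card = fourier_regressors(cardiac_phase_at_scans(peaks, scans), order=3)
print(card.shape)                        # (100, 6) -> 6 nuisance columns
```

The physiological columns in the matrix above are built from sine/cosine pairs of this general kind, stacked for pulse and respiration.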

You can see that the first 7 columns model my conditions (correct stops, unaware errors, aware errors, false alarms, and some self-report ratings), the next 20 model the RETROICOR-transformed pulse and respiration timeseries, 41 columns model RVT, 6 model the realignment parameters, and the final columns are my session offsets and constant. It's a big DM, but since we have over 1000 degrees of freedom, I'm not too worried about all the extra regressors in terms of loss of power. What would be worrisome is if, for example, stop activity correlated strongly with any of the nuisance variables – we can see from the orthogonality plot that in this subject at least, that is not the case. Now let's see if we actually have anything interesting left over after we remove all that noise:

[Figure: SPM map of stop-related activity.]

We can see that the stop-related activity looks pretty reasonable, clustering around the motor and premotor cortex, bilateral insula, and DLPFC, all canonical motor-inhibition regions (FWE cluster-corrected, p = 0.05). This is a good sign! Now what about all those physiological regressors? Are they doing anything of value, or just sucking up our power? Here is the F-contrast over the pulse regressors:

[Figure: F-contrast over the pulse regressors.]

Here we can see that the peak signal is wrapped right around the pons/upper brainstem. This makes a lot of sense – the area is full of the primary vasculature that ferries blood into and out of the brain. If I were particularly interested in getting signal from the brainstem in this project, I could use a respiration × pulse interaction regressor to better model this [6]. Penny et al. find similar results to our cardiac F-test when comparing AR(1) with higher-order AR models [7]. But since we're really only interested in higher cortical areas, the pulse regressor should be sufficient. We can also see quite a bit of variance explained around the bilateral insula and rostral anterior cingulate. Interestingly, our stop-related activity still contained plenty of significant insula response, so we can feel better that some, but not all, of the signal from that region is actually functionally relevant. What about respiration?

[Figure: F-contrast over the respiration regressors.]

Here we see a ton of variance explained around the occipital lobe. This makes good sense – we tend to just slightly nod our head back and forth along the z-axis as we breathe. What we are seeing is the motion-induced artifact of that rotation, which is most severe along the back of the head and the periphery of the brain. We see a similar result for the overall motion regressors, but flipped to the front:

Ignore the above – the respiration regressor is not viable due to "clipping"; see the note at the top of the post. Glad I warned everyone that this post was "in progress" 🙂 Respiration effects should be a bit more global, concentrated around the ventricles and blood vessels.

[Figure: F-contrast over the motion regressors.]

Wow, look at all the significant activity! Someone call up Nature and let them know – motion lights up the whole brain! As we would expect, the motion regressors explain a ton of uninteresting variance, particularly around the prefrontal cortex and periphery.

I still have a ways to go on this project – obviously this is just a single subject, and the results could vary wildly. But I do think even at this point we can start to see that it is quite easy and desirable to model these effects in your data. (Note: we had some technical failure due to the respiration belt being a POS…) I should note that in SPM, these sources of "non-white" noise are typically modeled using an autoregressive AR(1) model, which is enabled in the default settings (we've turned it off here). However, as there is evidence that this model performs poorly at faster TRs (which are now the norm), and that a noise-modelling approach can greatly improve SNR while removing artifacts, we are likely to get better performance out of a nuisance regression technique as demonstrated here [4]. The next step will be to take these regressors to a second-level analysis, to examine whether the meditation group has significantly more BOLD variance explained by physiological noise than do controls. Afterwards, I will re-run the analysis without any physio parameters, to compare the results of both.
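
As a final aside for anyone who wants to poke at these ideas outside SPM: each F-contrast above is, at heart, a classical GLM model comparison – does a block of nuisance columns explain significant extra variance at a voxel? Here is a generic, hypothetical sketch in plain NumPy/SciPy (not SPM's machinery, and deliberately ignoring temporal autocorrelation, which is exactly what the AR(1)-versus-nuisance-regression debate concerns):

```python
import numpy as np
from scipy import stats

def nuisance_f_test(y, X_full, nuisance_cols):
    """Classical GLM F-test for a block of regressors: compare the residual
    sum of squares of the full model against a reduced model with the
    nuisance columns removed. Assumes X_full has full column rank and
    white errors (no AR modelling here)."""
    X_red = np.delete(X_full, nuisance_cols, axis=1)
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return float(resid @ resid)
    rss_full, rss_red = rss(X_full), rss(X_red)
    df1 = len(nuisance_cols)            # number of regressors being tested
    df2 = len(y) - X_full.shape[1]      # residual degrees of freedom
    F = ((rss_red - rss_full) / df1) / (rss_full / df2)
    return F, stats.f.sf(F, df1, df2)   # F statistic and p-value

# Toy usage: 1000 'scans', 7 task + 6 'motion' columns; test the motion block
rng = np.random.default_rng(42)
X = rng.standard_normal((1000, 13))
y = X[:, 7:] @ rng.standard_normal(6) + rng.standard_normal(1000)
print(nuisance_f_test(y, X, nuisance_cols=list(range(7, 13))))  # huge F, tiny p
```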

References:


1. Birn RM, Diamond JB, Smith MA, Bandettini PA.
Separating respiratory-variation-related fluctuations from neuronal-activity-related fluctuations in fMRI.
Neuroimage. 2006 Jul 15;31(4):1536-48. Epub 2006 Apr 24.

2. Brooks J.C.W., Beckmann C.F., Miller K.L. , Wise R.G., Porro C.A., Tracey I., Jenkinson M.
Physiological noise modelling for spinal functional magnetic resonance imaging studies
NeuroImage, in press. doi: 10.1016/j.neuroimage.2007.09.018

3. Glover GH, Li TQ, Ress D.
Image-based method for retrospective correction of physiological motion effects in fMRI: RETROICOR.
Magn Reson Med. 2000 Jul;44(1):162-7.

4. Lund TE, Madsen KH, Sidaros K, Luo WL, Nichols TE.
Non-white noise in fMRI: does modelling have an impact?
Neuroimage. 2006 Jan 1;29(1):54-66.

5. Wise RG, Ide K, Poulin MJ, Tracey I.
Resting fluctuations in arterial carbon dioxide induce significant low frequency variations in BOLD signal.
Neuroimage. 2004 Apr;21(4):1652-64.

6. Brooks J.C.W., Beckmann C.F., Miller K.L., Wise R.G., Porro C.A., Tracey I., Jenkinson M.
Physiological noise modelling for spinal functional magnetic resonance imaging studies
NeuroImage, in press. doi: 10.1016/j.neuroimage.2007.09.018

7. Penny, W., Kiebel, S., & Friston, K. (2003). Variational Bayesian inference for fMRI time series. NeuroImage, 19(3), 727–741. doi:10.1016/S1053-8119(03)00071-5

Google Wave for Scholarly Co-authorship: excerpt from Neuroplasticity and Consciousness Abstract

Gary Williams and I are working together on a paper investigating consciousness and neuroplasticity. We're using Google Wave for this collaboration, and I must say it is an excellent co-authorship tool. There is nothing quite so neat as watching your ideas flow and meld together in real time. There are now new built-in document templates that make these kinds of projects a blast. As an added bonus, all edits are identified and tracked in real time, letting you keep easy track of who wrote what. One of the most surprising things to come out of this collaboration is the newness of the thoughts. Whatever it is we end up arguing, it is definitely not reducible to the sum of its parts. As a teaser, I thought I'd post a thread from the wave I made this morning. This is basically just me rambling on about consciousness and plasticity after reading the results of our wave. I wish I could post the movie of our edits, but that will have to wait for the paper's submission.

I have an idea I want to work in that was provoked by this paper:
http://www.jneurosci.org/cgi/content/abstract/30/18/6205

Somewhere in here I still feel a nagging paradox, but I can’t seem to put my finger on it. Maybe I’m simply trying to explain something I don’t have an explanation for. I’m not sure. Consider this a list of thoughts that may or may not have any relationship to the kind of account we want to make here.

They basically show that different synesthetic experiences have different neural correlates in the structural brain matter. I think it would be nice to tie our paper to the (likely) focus of the other papers: the idea of changing qualia / changing NCCs. Maybe we can argue that, due to neural plasticity, we should not expect the 'neural representations' for sensory experience of any two adults to be identical; rather we should expect that every individual develops their own unique representational qualia that are partially ineffable. Then we can argue that this is precisely why we must rely on narrative scaffolding to make sense of the world; it is only through practice with narrative, engendered by frontal plasticity, that we can understand the statistical similarities between our qualia and those of others. Something is not quite right in this account though… and our abstract is basically fine as is.

So, I have my own unique qualia that are constantly changing – my qualia and NCCs are in dynamical flux with one another. However, my embodiment pre-configures my sensory experience to have certain common qualities across the species. Narrative explanations of the world are grounded in capturing this intersubjectivity; they are linguistic representations of individual sense impressions woven together by cultural practices and schemas. What we want to say is that I am able to learn about the world through narrative practice precisely because I am able to map my own unique sensory representations onto those of others.

I guess that last part of what I said is still weak, but it seems like this could be a good element to explore in the abstract. It keeps us from being too far away from the angle of the call though, maybe. I can’t figure out exactly what I want to say. There are a few elements:

  • Narratives are co-created, coherent, shareable, complex representations of the world that encode temporality, meaning, and intersubjectivity.
  • I’m able to learn about these representations of the world through narrative practice; by mapping my own unique dynamic sensory experience to the sensory and folk psychological narratives of others.
  • Narrative encodes sensory experience in ways that transcend the limits of personal qualia; they are offloaded and are no longer dynamic in the same way.
  • Sensory experience is in constant flux and can be thrown out of alignment with narrative, as in the case of most psychopathology.
  • I need some way to structure this flux; narrative is intersubjective and it provides second order qualia??
  • Narrative must be plastic, as it is always growing; the relations between events, experiences, and sensory representations must always be shifting. Today I may really enjoy the smell of flowers and all the things that come with them (the memory of a past girlfriend, my enjoyment of things that smell sweet, the association I have with hunger). But tomorrow I might get buried alive in some flowers; now my sensory representation of flowers is going to have all-new associations. I may attend to a completely different set of salient factors; I might find that the smell now reminds me of a grave, that I remember my old girlfriend was a nasty bitch, and that I'm allergic to sweet things. This must be reflected in the connective weights of the sensory representations; the overall connectivity map has been altered because a node (the flower node) has been drastically altered by a contra-narrative sensory trauma.
  • I think this is a crucial account, and it helps explain the role of the default mode in consciousness. On this account, the DMN is the mechanism driving reflective, spontaneous narrativization of the world. These oscillations are akin to the constant labeling and scanning of my sensory experience. That they persist in sleep probably indicates that this process is highly automatic and involved in memory formation. As introspective thoughts begin to gain coherency and collude together, they gain greater roles in my overall conscious self-narrative.
  • So I think this is what I want to say: our pre-frontal default mode system is in constant flux. The nodes are all plastic, and so is the pattern of activations between them. This area is fundamentally concerned with reflective self-relatedness and probably develops through childhood interaction. Further, there is an important role for control here. I think that a primary function of social-constructive brain areas is the control of action. Early societies developed complex narrative rule systems precisely to control and organize group action. This allowed us to transcend simple brute force, to coordinate action, and to specialize in various agencies. The medial prefrontal cortex – the central node, fundamentally invoked in acts of social cognition and narrative comprehension – has massive reciprocal connectivity to limbic areas, and also to prefrontal areas concerned with reward and economic decision making.
  • We need a plastic default mode precisely to allow for the kinds of radical enculturation we go through during development. It is quite difficult to teach an infant, born with the same basic equipment as a caveman, the intricacies of mathematics and philosophy. Clearly narrative comprehension requires a massive amount of learning; we must learn all of the complex cultural nuances that define us as modern humans.
  • Maybe sensory-motor coupling and resonance allow for the simulation of precise spatiotemporal activity patterns. This intrinsic activity is like a constant 'reading out' of the dynamic sensory representations that are being constantly updated through neuroplasticity; whatever the totality of the connection weights, that is my conscious narrative of my experience.
  • Back to the issue of control. It’s clear to me that the prefrontal default system is highly sensitive to intersubjective or social information/cues. I think there is really something here about offloading intentions, which are relatively weak constructions, into the group, where they can be collectively acted upon (like in the drug addict/rehab example). So maybe one role of my narration system is simply to vocalize my sensory experience (I’m craving drugs. I can’t stop craving drugs) so that others can collectively act on them.

Well, there you have it. I have a feeling this is going to be a great paper. We're going to try to flip the whole debate on its head and argue for a central role of plasticity in embodied and narrative consciousness. It's great fun to be working with Gary again; his mastery of philosophy of mind and phenomenology is quite fearsome, and we've been developing these ideas forever. I'll be sure to post updates from GWave as the project progresses.

Snorkeling ’the shallows’: what’s the cognitive trade-off in internet behavior?

I am quite eager to comment on the recent explosion of e-commentary regarding Nicholas Carr’s new book, The Shallows. Bloggers have already done an excellent job summarizing the response to Carr’s argument. Further, Clay Shirky and Jonah Lehrer have both argued convincingly that there’s not much new about this sort of reasoning. I’ve also argued along these lines, using the example of language itself as a radical departure from pre-linguistic living. Did our predecessors worry about their brains as they learned to represent the world with odd noises and symbols?

Surely they did not. And yet we can also be sure that the brain underwent a massive revolution following the acquisition of language. Chomsky’s linguistics would of course obscure this fact, preferring to have us believe that our linguistic abilities are the amalgamation of things we already possessed: vision, problem solving, auditory and acoustic control. I’m not going to spend too much time arguing against the modularist view of cognition, however; chances are that if you are here reading this, you are already pretty convinced that the brain changes in response to cultural adaptations.

It is worth sketching out a stock Chomskyan response, however. Strict nativists, like Chomsky, hold that our language abilities are the product of an innate grammar module. Although typically agnostic about the exact source of this module (it could have been a genetic mutation, for example), nativists argue that plasticity of the brain has no potential beyond slightly enhancing or diminishing our existing abilities. You get a language module, a cognition module, and so on, and you don’t have much choice in how you use that schema or what it does. The development of language, on this view, wasn’t something radically new that changed the brain of its users, but rather a novel adaptation of things we already had, and still have.

To drive home the point, it’s not surprising that notable nativist Steven Pinker is quoted as simply not buying the ‘changing our brains’ hypothesis:

“As someone who believes both in human nature and in timeless standards of logic and evidence, I’m skeptical of the common claim that the Internet is changing the way we think. Electronic media aren’t going to revamp the brain’s mechanisms of information processing, nor will they supersede modus ponens or Bayes’ theorem. Claims that the Internet is changing human thought are propelled by a number of forces: the pressure on pundits to announce that this or that “changes everything”; a superficial conception of what “thinking” is that conflates content with process; the neophobic mindset that “if young people do something that I don’t do, the culture is declining.” But I don’t think the claims stand up to scrutiny.”

Pinker makes some good points: I agree that a lot of hype is driven by the kinds of thinking he mentions. Yet I do not at all agree that electronic media cannot and will not revamp our mechanisms for information processing. In contrast to the nativist account, I think we have better reason than ever to suspect that the relation between brain and cognition is not 1:1 but rather dynamic, evolving with us as we develop new tools that stimulate our brains in unique and interesting ways.

The development of language massively altered the functioning of our brain. Given the ability to represent the world externally, we no longer needed to rely on perceptual mechanisms in the same way. Our ability to discriminate amongst various types of plants, or sounds, is clearly subpar to that of our non-linguistic brethren. And so we come full circle. The things we do change our brains. And it is the case that our brains are incredibly economical. We know, for example, that within hours of limb amputation, neighboring somatosensory representations begin to invade the deafferented cortex, reassigning those cells rather than letting them fall silent. The brain is quite massively plastic; Nicholas Carr certainly gets that much right.

Perhaps the best way to approach this question is with an excerpt from social media. I recently asked my fellow tweeps:

To which an astute follower replied:

Now, I do realize that this is really the central question in the ‘shallows’ debate. Moving on from the basic fact that our brains are quite plastic, we all readily accept that we’re becoming the subject of some very intense stimulation. Most social media users, and general internet users, shift rapidly from task to task, tweet to tweet. In my own workflow, I may open dozens and dozens of tabs, searching for that one paper or quote that can propel me to a new insight. Sometimes I get confused and forget what I was doing. Yet none of this interferes at all with my ‘deep thinking’. Eventually I go home and read a fantastic sci-fi book like Snow Crash. My imagination of the book is just as good as ever, and I can’t wait to get online and start discussing it. So where is the trade-off?

So there must be a trade-off, right? Tape a kitten’s eyes shut and its visual cortex is re-assigned to other sensory modalities. The brain is a nasty economist, and if we’re stimulating one new thing we must be losing something old. Yet what did we lose with language? Perhaps we lost some vestigial abilities to sense and smell. Yet we gained the power of the sonnet, the persuasion of rhetoric, the imagination of narrative, the ability to travel to the moon and murder the earth.

In the end, I’m just not sure the internet provides the right kind of stimulation to force such a trade-off. We’re not going to lose our ability to read. In fact, I think I can make an extremely tight argument against the specific hypothesis that the internet robs us of our ability to deep-think. Deep thinking is itself a controversial topic. What exactly do we mean by it? Am I deep thinking if I spend all day shifting between 9 million tasks? Nicholas Carr says no, but how can he be sure those 9 million tasks are not converging around a central creative point?

I believe, contrary to Carr, that internet and social media surfing is a unique form of self-stimulation and expression. By interacting together in the millions through networks like twitter and facebook, we’re building a cognitive apparatus that, like language, does not function entirely within the brain. By increasing access to information, and the customizability of that access, we’re ensuring that millions of users have access to all kinds of thought-provoking information. In his book, Carr says things like ‘on the internet, there’s no time for deep thought. it’s go go go’. But that is only one particular usage pattern, and it ignores ample research suggesting that posts online may in fact be more reflective and honest than in-person utterances (I promise, I am going to do a lit review post soon!).

Today’s internet user doesn’t have to conform to whatever Carr thinks is the right kind of deep thought. Rather, we can ‘skim the shallows’ of twitter and facebook for impressions, interactions, and opinions. When I read a researcher, I no longer have to spend years attending conferences to get a personal feel for them. I can instead look at their wikipedia page, read the discussion there, and see what’s being said on twitter. In short, skimming the shallows makes me better able to choose the topics I want to investigate deeply, and lets me learn about them in whatever temporal pattern I like. Youtube with a side of wikipedia and blog posts? Yes please. It’s a multi-modal, whole-brain experience that isn’t likely to conform to ‘on/off’ dichotomies. Sure, something may be sacrificed; then again, it may not be. It might be that digital technology has enough of the old (language, vision, motivation) plus enough of the new that it just might constitute, or bring about, radically new forms of cognition. These will undoubtedly change our cognitive style, perhaps rendering Pinker’s Bayesian mechanisms obsolete in favor of new, digitally referential ones.

So I don’t have an answer for you yet, ToddStark. I do know, however, that we’re going to have to take a long, hard look at the research reviewed by Carr. Further, it seems quite clear that there can be no one-sided view of digital media. It’s no more intrinsically good or bad than language. Language can be used to destroy nations just as it can tell a little girl a thoughtful bedtime story. If we’re too quick to make up our minds about what internet-cognition is doing to our plastic little brains, we might miss the forest for the trees. The digital media revolution gives us the chance to learn just what happens in the brain when it’s got a shiny new tool. We don’t know the exact nature of the stimulation, and finding out is going to require a look at all the evidence, for and against. Finally, it’s a gross oversimplification to talk about internet behavior as ‘shallow’ or ‘deep’. Research on usage and usability tells us this: there are many ways to use the internet, and some of them probably get us thinking much more deeply than others.