fMRI study of Shamans tripping out to phat drumbeats

Every now and then, I’m browsing RSS on the tube commute and come across a study that makes me laugh out loud. This of course results in me receiving lots of ‘tuts’ from my co-commuters. Anyhow, the latest such entry to the world of cognitive neuroscience is a study examining brain responses to drum beats in shamanic practitioners. Michael Hove and colleagues of the Max Planck Institute in Leipzig set out to study “Perceptual Decoupling During an Absorptive State of Consciousness” using functional magnetic resonance imaging (fMRI). What exactly does that mean? Apparently: looking at how brain connectivity in ‘experienced shamanic practitioners’ changes when they listen to rhythmic drumming. Hove and colleagues explain that across a variety of cultures, ‘quasi-isochronous drumming’ is used to induce ‘trance states’. If you’ve ever been dancing around a drum circle in the full moon light, or tranced out to Shpongle in your living room, I guess you get the feeling, right?

Anyway, Hove et al. recruited 15 participants who were trained in “core shamanism,” described as:

“a system of techniques developed and codified by Michael Harner (1990) based on cross-cultural commonalities among shamanic traditions. Participants were recruited through the German-language newsletter of the Foundation of Shamanic Studies and by word of mouth.”

They then played these participants rhythmic, quasi-isochronous drumming (the trance condition) versus drumming with more irregular timing. In what might be the greatest use of a Likert scale of all time, participants rated whether they “would describe [their] experience as a deep shamanic journey” (1 = not at all; 7 = very much so), and indeed rated the trance condition as, well, more trancey. Hove and colleagues then used a fairly standard connectivity analysis, examining eigenvector centrality differences between the two drumming conditions, as well as seed-based functional connectivity:

[Figure: eigenvector centrality differences between the drumming conditions]

[Figure: seed-based functional connectivity results]
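For the curious, the seed-based part of such an analysis boils down to correlating a seed region’s time course with every other voxel in the brain. Here is a minimal sketch of that idea using nilearn. To be clear, this is not the authors’ actual pipeline: the input image is a placeholder, and the posterior cingulate seed coordinate is a generic one from the default-mode literature, not from the paper.

```python
import numpy as np
from nilearn.maskers import NiftiMasker, NiftiSpheresMasker  # nilearn >= 0.9

func_img = "preprocessed_bold.nii.gz"  # hypothetical preprocessed 4D image

# Mean time course from an 8 mm sphere around a generic PCC seed.
seed_masker = NiftiSpheresMasker(seeds=[(0, -52, 26)], radius=8,
                                 detrend=True, standardize=True)
seed_ts = seed_masker.fit_transform(func_img)    # shape: (n_volumes, 1)

# Time courses for every voxel in the brain.
brain_masker = NiftiMasker(detrend=True, standardize=True)
brain_ts = brain_masker.fit_transform(func_img)  # shape: (n_volumes, n_voxels)

# Because both series are standardized, the dot product divided by the
# number of volumes is the Pearson correlation; arctanh applies Fisher's z.
corr = np.dot(brain_ts.T, seed_ts) / seed_ts.shape[0]
fc_map = brain_masker.inverse_transform(np.arctanh(corr).T)
fc_map.to_filename("pcc_seed_fc.nii.gz")
```

You would then compare these z-maps between the trance and non-trance conditions; the eigenvector centrality analysis is a different beast, computed over the full voxel-by-voxel connectivity graph.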

Hove et al. report that compared to the non-trance condition, the posterior/dorsal cingulate, insula, and auditory brainstem regions became more ‘hublike’, as indicated by a higher overall centrality of these regions. Further, these regions showed stronger functional connectivity with the posterior cingulate cortex. I’ll let Hove and colleagues explain what to make of this:

“In sum, shamanic trance involved cooperation of brain networks associated with internal thought and cognitive control, as well as a dampening of sensory processing. This network configuration could enable an extended internal train of thought wherein integration and moments of insight can occur. Previous neuroscience work on trance is scant, but these results indicate that successful induction of a shamanic trance involves a reconfiguration of connectivity between brain regions that is consistent across individuals and thus cannot be dismissed as an empty ritual.”

Ultimately the authors’ conclusion seems to be that these brain connectivity differences show that, if nothing else, something must be ‘really going on’ in shamanic states. To be honest, I’m not really sure anyone disagreed with that to begin with. More broadly, I can’t critique this study without thinking of early (and ongoing) meditation research, where esoteric monks are placed in scanners to show that ‘something really is going on’ in meditation. This argument seems to me to rely on a folk-psychological misunderstanding of how the brain works. Even in placebo conditioning, a typical example of a ‘mental effect’, we know of course that changes in the brain are responsible. Every experience (regardless of how complex) has some neural correlate. The trick is to relate these neural factors to behavioral ones in a way that actually advances our understanding of the mechanisms and experiences that generate them.

The difficulty with these kinds of studies is that all we can do is perform reverse inference to try and interpret what is going on; the authors’ conclusion about changes in sensory processing is a clear example of this. What do changes in brain activity actually tell us about trance (and other esoteric) states? Without being coupled to some meaningful understanding of the states themselves, they certainly don’t reveal any particular mechanism or phenomenological quality. As a clear example, surely we’re pushing reductionism to its limit by asking participants to rate a self-described transcendent state on a unidimensional Likert scale? The authors do cite Francisco Varela (a pioneer of neurophenomenological methods), but don’t seem to further consider these limitations or possible future directions.

Overall, I don’t want to seem overly critical of this amusing study. Certainly shamanic traditions are a deeply important part of human cultural history, and understanding how they impact us emotionally, cognitively, and neurologically is a valuable goal. For what amounts to a small pilot study, the protocols seem fairly standard from a neuroscience standpoint. I’m less certain about who these ‘shamans’ actually are, in terms of what their practice constitutes, or how to think about the supposed ‘trance states’, but I suppose ‘something interesting’ was definitely going on. The trick is knowing exactly what that ‘something’ is.

Future studies might thus benefit from a more direct characterization of esoteric states and the cultural practices that generate them, perhaps through collaboration with anthropologists and/or the application of phenomenological and psychophysical methods. For now however, I’ll just have to head to my local drum circle and vibe out the answers to these questions.

Hove MJ, Stelzer J, Nierhaus T, Thiel SD, Gundlach C, Margulies DS, Van Dijk KRA, Turner R, Keller PE, Merker B (2016) Brain Network Reconfiguration and Perceptual Decoupling During an Absorptive State of Consciousness. Cerebral Cortex 26:3116–3124.


A walk in the park increases poor research practices and decreases reviewer critical thinking

Or so I’m going to claim, because science is basically about making up whatever qualitative opinion you like and hard-selling it to a high-impact journal, right? Last night a paper appeared in PNAS early access entitled “Nature experience reduces rumination and subgenual prefrontal cortex activation” as a contributed submission. Like many of you I immediately felt my neurocringe brain area explode with activity as I began to smell the sickly sweet scent of gimmickry. Now, I don’t have a lot of time, so I was worried I wouldn’t be able to cover this paper in any detail. But not to worry, because the entire paper is literally two ANOVAs!

Look guys, we’re headed to PNAS! No, no, leave the critical thinking skills, we won’t be needing those where we’re going!

The paper begins with a lofty appeal to our naturalistic sensibilities: we’re increasingly living in urban areas, this trend is associated with poor mental health outcomes, and by golly-gee, shouldn’t we have a look at the brain to figure this all out? The authors set about testing their hypothesis by sending 19 people out into the remote wilderness of the Stanford University campus, and another 19 into an urban setting:

The nature walk took place in a greenspace near Stanford University spanning an area ∼60 m northwest of Junipero Serra Boulevard and extending away from the street in a 5.3-km loop, including a significant stretch that is far (>1 km) from the sounds and sights of the surrounding residential area. As one proxy for urbanicity, we measured the proportion of impervious surface (e.g., asphalt, buildings, sidewalks) within 50 m of the center of the walking path (Fig. S4). Ten percent of the area within 50 m of the center of the path comprised of impervious surface (primarily of the asphalt path). Cumulative elevation gain of this walk was 155 m. The natural environment of the greenspace comprises open California grassland with scattered oaks and native shrubs, abundant birds, and occasional mammals (ground squirrels and deer). Views include neighboring, scenic hills, and distant views of the San Francisco Bay, and the southern portion of the Bay Area (including Palo Alto and Mountain View to the south, and Menlo Park and Atherton to the north). No automobiles, bicycles, or dogs are permitted on the path through the greenspace.

Wow, where can I sign up for this truly Kerouac-inspired bliss? The control group, on the other hand, had to survive the horrors of the Palo Alto urban wasteland:

The urban walk took place on the busiest thoroughfare in nearby Palo Alto (El Camino Real), a street with three to four lanes in each direction and a steady stream of traffic. Participants were instructed to walk down one side of the street in a southeasterly direction for 2.65 km, before turning around at a specific point marked on a map. This spot was chosen as the midpoint of the walk for the urban walk to match the nature walk with respect to total distance and exercise. Participants were instructed to cross the street at a pedestrian crosswalk/stoplight, and return on the other side of the street (to simulate the loop component of the nature walk and greatly reduce repeated encounters with the same environmental stimuli on the return portion of the walk), for a total distance of 5.3 km; 76% of the area within 50 m of the center of this section of El Camino was comprised of impervious surfaces (of roads and buildings) (Fig. S4). Cumulative elevation gain of this walk was 4 m. This stretch of road consists of a significant amount of noise from passing cars. Buildings are almost entirely single- to double-story units, primarily businesses (fast food establishments, cell phone stores, motels, etc.). Participants were instructed to remain on the sidewalk bordering the busy street and not to enter any buildings. Although this was the most urban area we could select for a walk that was a similar distance from the MRI facility as the nature walk, scattered trees were present on both sides of El Camino Real. Thus, our effects may represent a conservative estimate of effects of nature experience, as our urban group’s experience was not devoid of natural elements.

And they got that approved by the local ethics board? The horror!

The authors gave both groups a self-reported rumination questionnaire before and after the walk, and also acquired some arterial spin labeling MRIs. Here is where the real fun gets started – and basically ends – as the paper is almost entirely composed of group-by-time ANOVAs on these two measures. I wish I could say I was surprised by what I found in the results:
[Image: the paper’s ANOVA results for self-reported rumination]

That’s right folks – the key behavioral interaction of the paper – is non-significant. Measly. Minuscule. Forget about p-values for a second and consider the gall it takes to not only completely skip over this fact (nowhere in the paper is it mentioned) and head right to the delicious t-tests, but to egregiously promote this ‘finding’ in the title, abstract, and discussion as showing evidence for an effect of nature on rumination! Erroneous interaction for the win, at least with PNAS contributed submissions, right?! The authors also analyzed the brain data in the same way – this time actually sticking with their NHST – and found that some brain area that has been previously related to some bad stuff showed reduced activity. And that – besides heart rate and respiration control analyses – is it. No correlations with the (non-significant) behavior. Just pure and simple reverse inference piled on top of a fallacious interpretation of a non-significant interaction. Never mind the wonky and poorly operationalized research question!
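For anyone keeping score at home, the causal claim rises or falls on exactly one number: the group × time interaction. Here is a toy sketch of that test, using simulated data and the pingouin package (my choice of tool, not the authors’ analysis code), just to show which row of the ANOVA table actually carries the claim:

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n = 19  # participants per group, as in the paper

# Simulated rumination scores: both groups improve a little on average,
# with no genuine group-by-time interaction built in.
rows = []
for group in ("nature", "urban"):
    for subj in range(n):
        pre = rng.normal(35, 5)
        post = pre - rng.normal(1.5, 3)  # same average change in both groups
        sid = f"{group}_{subj}"
        rows.append(dict(subject=sid, group=group, time="pre", rumination=pre))
        rows.append(dict(subject=sid, group=group, time="post", rumination=post))
df = pd.DataFrame(rows)

# Mixed (split-plot) ANOVA. The 'Interaction' row is the test the title's
# claim depends on; within-group t-tests are not a substitute for it.
aov = pg.mixed_anova(data=df, dv="rumination", within="time",
                     subject="subject", between="group")
print(aov.round(3))
```

Run on data like these, the interaction row hovers around chance – which is precisely the situation the paper glosses over.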

See folks, high-impact science is easy! Just have friends in the National Academy…

I’ll leave you with this gem from the methods:

“One participant was eliminated in analysis of self-reported rumination due to a decrease in rumination after nature experience that was 3 SDs below the mean.”

That dude REALLY got his time’s worth from the walk. Or did the researchers maybe forget to check if anyone smoked a joint during their nature walk?

Top 200 terms in cognitive neuroscience according to Neurosynth

Tonight I was playing around with some of the top features in Neurosynth (the searchable terms with the highest number of studies containing that term). You can find the list here; just sort by the number of studies. I excluded the top three terms, which are boring (e.g. “image”, “response”, and “time”) and whose extremely high weights would mess up the wordle. I then created a word cloud weighted so that each term’s size reflects its number of studies.

Here are the top 200 terms, sized according to the number of times they appear in Neurosynth’s 5,809 indexed fMRI studies:

[Image: word cloud of the top 200 Neurosynth terms]

Pretty neat! These are the 200 terms the Neurosynth database has the most information on, and together they give a pretty good overview of key concepts and topics in our field! I am sure there is something useful for everyone in there 😀
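I made mine with the Wordle applet, but if you’d rather script it, something like the sketch below would do. It assumes you’ve exported the Neurosynth features list to a CSV with ‘term’ and ‘num_studies’ columns (a hypothetical filename and layout), plus the Python wordcloud package:

```python
import pandas as pd
from wordcloud import WordCloud

# Hypothetical export of the Neurosynth features list:
# one row per term, with the number of indexed studies mentioning it.
terms = pd.read_csv("neurosynth_features.csv")  # columns: term, num_studies

# Drop the three boring high-frequency terms, then keep the top 200.
terms = terms[~terms["term"].isin(["image", "response", "time"])]
top200 = terms.nlargest(200, "num_studies")

# Size each term by its study count and render the cloud.
freqs = dict(zip(top200["term"], top200["num_studies"]))
wc = WordCloud(width=1600, height=900, background_color="white")
wc.generate_from_frequencies(freqs)
wc.to_file("neurosynth_wordle.png")
```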

Direct link to the wordle:

Wordle: neurosynth

We the Kardashians are Democratizing Science

I had a good laugh this weekend at a paper published in Genome Biology. Neil Hall, the paper’s author and a well-established Liverpool biologist, writes that in the brave new era of social media, there “is a danger that this form of communication is gaining too high a value and that we are losing sight of key metrics of scientific value, such as citation indices.” Wow, what a punchline! According to Neil, we’re in danger of forgetting that tweets and blog posts are the worthless gossip of academia. After all, who reads Nature and Science these days?? I know so many colleagues getting big grants and tenure-track jobs just over their tweets! Never mind that Neil himself has about 11 papers published in Nature journals – or perhaps we are left to sympathize with the poor, untweeted author? Bitter sarcasm aside, the article is a fun bit of satire, and I’d like to think charitably that it was aimed not only at ‘altmetrics’ but at the metric enterprise in general. Still, I agree totally with Kathryn Clancy that the joke fails insofar as it seems to be ‘punching down’ at those of us with less established CVs than Neil, who take to social media to network and advance our own fledgling research profiles. I think it also betrays a critical misapprehension, common among established scholars, of how social media fits into the research ecosystem. This sentiment is expressed rather precisely by Neil when discussing his Kardashian Index:

The Kardashian Index

“In an age dominated by the cult of celebrity we, as scientists, need to protect ourselves from mindlessly lauding shallow popularity and take an informed and critical view of the value we place on the opinion of our peers. Social media makes it very easy for people to build a seemingly impressive persona by essentially ‘shouting louder’ than others. Having an opinion on something does not make one an expert.”

So there you have it. Twitter equals shallow popularity. Never mind the endless possibilities of seamless networked interaction with peers from around the world. Never mind sharing the latest results, discussing them, and branching these interactions into blog posts that themselves evolve into papers. Forget entirely that without this infosphere of interaction, we’d be left totally at the whims of Impact Factor to find interesting papers among the thousands published daily. What it’s really all about is building a “seemingly impressive persona” by “shouting louder than others”. What then does constitute effective scientific output, Neil? The answer, it seems – more high-impact papers:

“I propose that all scientists calculate their own K-index on an annual basis and include it in their Twitter profile. Not only does this help others decide how much weight they should give to someone’s 140 character wisdom, it can also be an incentive – if your K-index gets above 5, then it’s time to get off Twitter and write those papers.”
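For the record, the K-index as defined in the paper is your actual Twitter follower count divided by the count ‘expected’ from your citations, F(c) = 43.3 × C^0.32. If you feel like flagellating yourself, here is a two-minute version (the formula is from Hall’s paper; the example numbers are made up):

```python
def kardashian_index(followers: int, citations: int) -> float:
    """K-index (Hall, 2014): actual Twitter followers divided by the
    follower count 'expected' from citations, F(c) = 43.3 * C**0.32."""
    expected = 43.3 * citations ** 0.32
    return followers / expected

# Made-up example: 5,000 followers and 800 citations.
print(round(kardashian_index(5000, 800), 1))  # ~13.6, deep Kardashian territory
```

A K-index above 5 is, per the paper, your cue to log off and ‘write those papers’.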

Well then, I’m glad we covered that. I’m sure there were many scientists and scholars out there who, amid the endless cycle of insane job pressure, publish-or-perish horse racing, and blood feuding for grants, thought, ‘gee, I’d better just stop this publishing thing entirely and tweet instead’. And likewise, I’m sure every young scientist looks at ‘Kardashians’ and thinks, ‘hey, I’d better suspend all critical thinking, forget all my training, and believe everything this person says’. I hope you can feel me rolling my eyes. Seriously though – this represents a fundamental and common misunderstanding of the point of all this faffing about on the internet. Followers, impact, and notoriety are all poorly understood side effects of this process; they are neither the means nor the goal. And never mind those less concrete (and misleading) contributions like freely shared code, data, or thoughts – the point here is to blather and gossip!

While a (sorta) funny joke, it is this point that Neil’s article does the greatest disservice. We (the Kardashians) are democratizing science. We are filtering the literally unending deluge of papers to try and find the most outrageous, the most interesting, and the most forgotten, so that they can see the light of day beyond wherever they were published and forgotten. We seek out these papers to generate discussion and to garner attention where it is needed most. We are the academy’s newest first line of defense, contextualizing results when the media runs wild with them. We tweet often because there is a lot to tweet, and we gain followers because the things we tweet are interesting. And we do all of this without the comfort of a lofty CV or high-impact track record, with little concrete assurance that it will even benefit us, all while still trying to produce the standard signs of success. It may not seem like it now – but in time it will be clear that what we do is just as much a part of the scientific process as those lofty Nature papers. Are we perfect? No. Do we sometimes fall victim to sensationalism or crowd mentality? Of course – we are only fallible human beings, trying to find and create utility within a new frontier. We may not be the filter science deserves – but we are the one it needs. Wear your Kardashian index with pride.

#MethodsWeDontReport – brief thought on Jason Mitchell versus the replicators

This morning Jason Mitchell self-published an interesting essay espousing his views on why replication attempts are essentially worthless. At first I was merely interested by the fact that what would obviously become a topic of heated debate was self-published, rather than going through the long slog of a traditional academic medium. Score one for self-publication, I suppose. Jason’s argument is essentially that null results don’t yield anything of value and that we should be improving the way science is conducted and reported rather than publicising our nulls. I found particularly interesting his short list of things that he sees as critical to experimental results but which nevertheless go unreported:

These experimental events, and countless more like them, go unreported in our method section for the simple fact that they are part of the shared, tacit know-how of competent researchers in my field; we also fail to report that the experimenters wore clothes and refrained from smoking throughout the session.  Someone without full possession of such know-how—perhaps because he is globally incompetent, or new to science, or even just new to neuroimaging specifically—could well be expected to bungle one or more of these important, yet unstated, experimental details.

While I don’t agree with the overall logic or conclusion of Jason’s argument (I particularly like Chris Said’s Bayesian response), I do think it raises some important, or at least interesting, points for discussion. For example, I agree that there is loads of potentially important stuff that goes on in the lab, particularly with human subjects and large scanners, that isn’t reported. I’m not sure to what extent that stuff can or should be reported, and I think that’s one of the interesting and under-examined topics in the larger debate. I tend to lean towards the stance that we should report just about everything we can – but of course publication pressures and tacit norms mean most of it won’t be published. And probably at least some of it doesn’t need to be? But which things exactly? And how do we go about reporting stuff like how we respond to random participant questions regarding our hypothesis?

To find out, I’d love to see a list of things you can’t or don’t regularly report, using the #methodswedontreport hashtag. Quite a few are starting to show up – most are funny or outright snarky (as seems to be the general mood of the response to Jason’s post), but I think a few describe pretty common lab occurrences and are even thought-provoking in terms of their potentially serious experimental side effects. Surely we don’t want to report all of these ‘tacit’ skills in our burgeoning method sections; the question is which ones need to be reported, and why are they important in the first place?

Short post: why I share (and share often)

If you follow my social media activities, I am sure by now that you know me as a compulsive share-addict. Over the past four years I have gradually increased both the amount of incoming and outgoing information I attempt to integrate on a daily basis. I start every day with a now-routine ritual of scanning new publications from 60+ journals and blogs using my firehose RSS feed, as well as integrating new links from various science sub-reddits, my curated Twitter cogneuro list, my friends and colleagues on Facebook, and email lists. I then curate the best, the most relevant to my interests, or in some cases the most outrageous of these links and share them back to Twitter, Facebook, Reddit, and colleagues.

Of course, in doing so, a frequent response from (particularly more senior) colleagues is: why?! Why do I choose to spend the time to both take in all that information and share it back to the world? The answer is quite simple: in sharing this stuff I get critical feedback from an ever-growing network of peers and collaborators. I can’t even count the number of times someone has pointed out something (for better or worse) that I would have otherwise missed in an article or idea. That’s right, I share it so I can see what you think of it! In this way I have been able not only to stay up to date with the latest research and concepts, but to receive constant, invaluable feedback from all of you lovely brains :). In some sense I literally distribute my cognition throughout my network – thanks for the extra neurons!

From the beginning, I have been able not only to assess the impact of this stuff, but also to gain deeper and more varied insights into its meaning. When I began my PhD I had the moderate statistical training of a BSc in psychology, with little direct knowledge of neuroimaging methods or theory. Frankly, it was bewildering. Just figuring out which methods to pay attention to, or what problems to look out for, was a headache-inducing nightmare. But I had to start somewhere, and so I started by sharing, and sharing often. As a result, almost every day I get amazing feedback pointing out critical insights or flaws in the things I share that I would have otherwise missed. In this way the entire world has become my interactive classroom! It is difficult to overstate the degree to which this interaction has enriched my abilities as a scientist and thinker.

It is only natural, however, for more senior investigators to worry about how much time one might spend on all this. I admit in the early days of my PhD I may have spent a bit too long lingering amongst the RSS trees and twitter swarms. But then again, it is difficult to place a price on the knowledge and know-how I garnered in this process (not to mention the invaluable social capital generated in building such a network!). I am a firm believer in “power procrastination”, which is just the process of regularly switching from more difficult but higher-priority tasks to more interesting but lower-priority ones. I believe that by spending my downtime taking in and sharing information, I’m letting my ‘default mode’ take a much-needed rest, while still feeding it with inputs that will actually make the hard tasks easier.

In all, on a good day I’d say I spend about 20 minutes each morning taking in inputs and another 20 minutes throughout the day sharing them. Of course, some days (looking at you, Fridays) I don’t always adhere to that, and there are those times when I have to ‘just say no’ and wait until the evening to get into that workflow. Productivity apps like Pomodoro have helped make sure I respect the balance when particularly difficult tasks arise. All in all, however, the time I spend sharing is paid back tenfold in new knowledge and deeper understanding.

Really, I should be thanking all of you – the invaluable peers, friends, colleagues, followers, and readers who give me the feedback that is so totally essential to my cognitive evolution. So long as you keep reading, I’ll keep sharing! Thanks!!

Note: I haven’t even touched on the value of blogging and post-publication peer review, which of course adds to the benefits mentioned here, and has also vastly improved my writing and comprehension skills! But that’s a topic for another post!

(Don’t worry, the skim-share cycle is no replacement for deep individual learning, which I also spend plenty of time doing!)

“you are a von economo neuron!” – Francesca 🙂

Fun fact – I read the excellent sci-fi novel Accelerando just prior to beginning my PhD. In the novel, the main character is an info-addict who integrates so much information that he gains a ‘five-second’ prescience on events as they unfold. He then shares these insights for free with anyone who wants them, generating billion-dollar companies (in which he owns no part) and gradually manipulating global events to bring about a technological singularity. I guess you could say I found this to be a pretty neat character 🙂 On a serious note though, I am a firm believer in free and open science, self-publication, and sharing-based economies. Information deserves to be free!

Surely, God loves the .06 (blob) nearly as much as the .05.

Image Credit: Dan Goldstein

“We are not interested in the logic itself, nor will we argue for replacing the .05 alpha with another level of alpha, but at this point in our discussion we only wish to emphasize that dichotomous significance testing has no ontological basis. That is, we want to underscore that, surely, God loves the .06 nearly as much as the .05. Can there be any doubt that God views the strength of evidence for or against the null as a fairly continuous function of the magnitude of p?”

Rosnow, R.L. & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist, 44, 1276-1284.

This colorful quote came to mind while discussing significance testing procedures with colleagues over lunch. In cognitive neuroscience, with its enormous boon of obfuscated data, it seems we are so often met with these kinds of seemingly absurd yet important statistical decisions. Should one correct p-values over the lifetime, as often suggested by our resident methodology expert? I love this suggestion; imagine an academia where the fossilized experts (no offense, experts) are tossed aside for the newest and greenest researchers, whose pool of p-values remains untapped!

Really though, just how many a priori anatomical hypotheses should one have sealed up in envelopes? As one colleague joked, it seems advantageous to keep a drawer full of wild speculations sealed away in case one’s whole-brain analysis fails to yield results. Of course we must observe and follow best scientific and statistical procedures to the fullest, but in truth researchers often find themselves at these obscure impasses, thousands of dollars in scanning funding spent, trying to decide whether or not they predicted a given region’s involvement. In these circumstances, it has even been argued that there is a certain ethical need to explore one’s data and not merely throw away all findings that don’t fit the hypothesis. While I do not support this claim, I believe it is worth considering. And further, I believe that a vast majority of the field, from the top institutions to the most obscure, often dips into these murky ethical realms.

This is one area where I hope “data-driven” science, as in the Human Genome and Human Connectome projects, can succeed. It also points to a desperate need for publishing reform; surely what matters is not how many blobs fall on one side of an arbitrary distinction, but rather a full and accurate depiction of one’s data and its implications. In a perfect world, we would not need to obscure the truth hidden in these massive datasets while we hunt for sufficiently low p-values.

Rather, we should publish a clear record, showing exactly what was done, what correlated with what, and where significance and non-significance lie. Perhaps we might one day dream of combing through such datasets, actually explaining what drove the .06s versus the .05s. For now, however, we must be careful not to look at our uncorrected statistical maps; for that way surely voodoo lies! And that is perhaps the greatest puzzle of all: two datasets, all things being equal. In one case the researcher writes down on paper, “blobs A, B, and C I shall see”, and then conducts ROI analyses on those regions. In the other, he first examines the uncorrected map, notices blobs A, B, and C, and then conducts a region-of-interest analysis. In both cases, the results and data are the same. And yet one is classic statistical voodoo – double dipping – and the other perfectly valid hypothesis testing. It seems our truth criterion lies not only with our statistics, but also, in some way, in the epistemological ether.
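The pragmatic stakes are easy to demonstrate by simulation: selecting voxels by their observed effect and then testing those same voxels inflates the apparent effect, while testing an a priori selection does not. A toy sketch (pure noise data, so any ‘effect’ is spurious by construction):

```python
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_voxels = 20, 5000

# Pure noise 'activation' data: no true effect at any voxel.
data = rng.normal(0, 1, size=(n_subjects, n_voxels))

# Double dipping: pick the 10 voxels with the largest observed group
# effect, then compute the effect in those same voxels.
selected = np.argsort(data.mean(axis=0))[-10:]
dipped = data[:, selected].mean()

# Valid ROI analysis: 10 voxels chosen 'in a sealed envelope'
# (here, at random) before looking at the data.
a_priori = rng.choice(n_voxels, size=10, replace=False)
valid = data[:, a_priori].mean()

print(f"double-dipped effect: {dipped:.3f}")  # reliably well above zero
print(f"a priori ROI effect:  {valid:.3f}")   # hovers around zero
```

Same data, same statistics; only the order of looking differs, and that ordering is exactly what the voodoo label polices.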

Of course, it’s really more of a pragmatic distinction than an ontological one. The voodoo distinction serves not to delineate true from false results, but rather to discourage researchers from engaging in risky practices that inflate the rate of false positives. All in all, I agree with Dorothy Bishop: we need to stop chasing novel, typically spurious findings and begin to share and investigate our data in ways that create lasting, informative truths. The brain is simply too complex and expensive an object of study to let these practices build into an inevitable file drawer of doom. It infuriates me how frustratingly obtuse many published studies are, even in top journals, regarding the precise methods and analyses that went into the paper. Wouldn’t we all rather share our data, and help explain it cohesively? I dread the coming collision between the undoubtedly monolithic iceberg of unpublished negative and spurious positive findings, and our most trusted brain-mapping paradigms.