fMRI study of Shamans tripping out to phat drumbeats

Every now and then, I’m browsing RSS on the tube commute and come across a study that makes me laugh out loud, which of course earns me lots of ‘tuts’ from my co-commuters. Anyhow, the latest such entry to the world of cognitive neuroscience is a study examining brain responses to drum beats in shamanic practitioners. Michael Hove and colleagues of the Max Planck Institute in Leipzig set out to study “Perceptual Decoupling During an Absorptive State of Consciousness” using functional magnetic resonance imaging (fMRI). What exactly does that mean? Apparently: looking at how brain connectivity in ‘experienced shamanic practitioners’ changes when they listen to rhythmic drumming. Hove and colleagues explain that across a variety of cultures, ‘quasi-isochronous drumming’ is used to induce ‘trance states’. If you’ve ever danced around a drum circle in the full moonlight, or tranced out to Shpongle in your living room, I guess you get the feeling, right?

Anyway, Hove et al. recruited 15 participants who were trained in “core shamanism,” described as:

“a system of techniques developed and codified by Michael Harner (1990) based on cross-cultural commonalities among shamanic traditions. Participants were recruited through the German-language newsletter of the Foundation of Shamanic Studies and by word of mouth.”

They then played these participants rhythmic isochronous drumming (the trance condition) versus drumming with a more irregular timing. In what might be the greatest use of a Likert scale of all time, participants rated whether they “would describe [their] experience as a deep shamanic journey?” (1 = not at all; 7 = very much so), and indeed rated the trance condition as, well, more trancey. Hove and colleagues then used a fairly standard connectivity analysis, examining eigenvector centrality differences between the two drumming conditions, as well as seed-based functional connectivity:
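For the curious, the intuition behind eigenvector centrality is that a region is ‘hublike’ if it connects strongly to other well-connected regions. Here’s a toy sketch via power iteration – purely illustrative, with a made-up four-region connectivity matrix, and definitely not the authors’ actual pipeline:

```python
# Hypothetical symmetric "connectivity" matrix for four brain regions;
# region 0 is deliberately wired up as a hub.
conn = [
    [0.0, 0.8, 0.7, 0.9],
    [0.8, 0.0, 0.2, 0.1],
    [0.7, 0.2, 0.0, 0.1],
    [0.9, 0.1, 0.1, 0.0],
]

def eigenvector_centrality(A, iters=200):
    """Power iteration: repeatedly multiply by A and renormalise.
    This converges to the leading eigenvector, whose entries rank
    nodes by how strongly they connect to other central nodes."""
    n = len(A)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

cent = eigenvector_centrality(conn)
hub = max(range(len(cent)), key=cent.__getitem__)
print(hub)  # region 0 comes out as the most central (hublike) node
```

In the actual study this kind of centrality is computed per voxel and contrasted between the trance and non-trance conditions.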

[Figure: eigenvector centrality results for the trance vs. non-trance contrast]

[Figure: seed-based functional connectivity results]

Hove et al. report that compared to the non-trance condition, the posterior/dorsal cingulate, insula, and auditory brainstem regions become more ‘hublike’, as indicated by the higher overall centrality of these regions. Further, these regions showed stronger functional connectivity with the posterior cingulate cortex. I’ll let Hove and colleagues explain what to make of this:

“In sum, shamanic trance involved cooperation of brain networks associated with internal thought and cognitive control, as well as a dampening of sensory processing. This network configuration could enable an extended internal train of thought wherein integration and moments of insight can occur. Previous neuroscience work on trance is scant, but these results indicate that successful induction of a shamanic trance involves a reconfiguration of connectivity between brain regions that is consistent across individuals and thus cannot be dismissed as an empty ritual.”

Ultimately the authors’ conclusion seems to be that these brain connectivity differences show that, if nothing else, something must be ‘really going on’ in shamanic states. To be honest, I’m not sure anyone disagreed with that to begin with. I can’t critique this study without thinking of early (and ongoing) meditation research, where esoteric monks are placed in scanners to show that ‘something really is going on’ in meditation. This argument seems to rely on a folk-psychological misunderstanding of how the brain works. Even in placebo conditioning, a typical example of a ‘mental effect’, we know of course that changes in the brain are responsible. Every experience (however complex) has some neural correlate. The trick is to relate these neural factors to behavioral ones in a way that actually advances our understanding of the mechanisms and experiences that generate them. The difficulty with these kinds of studies is that all we can do is perform reverse inference to interpret what is going on; the authors’ conclusion about changes in sensory processing is a clear example. What do changes in brain activity actually tell us about trance (and other esoteric) states? Without being coupled to some meaningful understanding of the states themselves, they certainly don’t reveal any particular mechanism or phenomenological quality. As a clear example, we’re surely pushing reductionism to its limit by asking participants to rate a self-described transcendent state on a unidimensional Likert scale. The authors do cite Francisco Varela (a pioneer of neurophenomenological methods), but don’t seem to further consider these limitations or possible future directions.

Overall, I don’t want to seem overly critical of this amusing study. Shamanic traditions are certainly a deeply important part of human cultural history, and understanding how they impact us emotionally, cognitively, and neurologically is a valuable goal. For what amounts to a small pilot study, the protocols seem fairly standard from a neuroscience standpoint. I’m less certain about who these ‘shamans’ actually are, what their practice constitutes, or how to think about the supposed ‘trance states’, but I suppose ‘something interesting’ was definitely going on. The trick is knowing exactly what that ‘something’ is.

Future studies might thus benefit from a more direct characterization of esoteric states and the cultural practices that generate them, perhaps through collaboration with anthropologists and/or the application of phenomenological and psychophysical methods. For now, however, I’ll just have to head to my local drum circle and vibe out the answers to these questions.

Hove MJ, Stelzer J, Nierhaus T, Thiel SD, Gundlach C, Margulies DS, Van Dijk KRA, Turner R, Keller PE, Merker B (2016) Brain Network Reconfiguration and Perceptual Decoupling During an Absorptive State of Consciousness. Cerebral Cortex 26:3116–3124.

 

How useful is twitter for academics, really?

Recently I was intrigued by a post on twitter conversion rates (i.e., the likelihood that a view of your tweet results in a click on the link) by journalist Derek Thompson at the Atlantic. Derek writes that although using twitter gives him great joy, he’s not sure it results in the kind of readership his employers would feel merits the time spent on the service. Derek found that even his most viral tweets only resulted in a conversion rate of about 3% – on par with the click-through rate of East Asian display ads (i.e., quite poor by media-world standards). Using the recently released twitter metrics, Derek found an average conversion of around 1.5%, with the best posts hitting the 3% ceiling. Ultimately he concludes that twitter seems to be great at generating buzz within the twitter-sphere but performs poorly at translating that buzz into external influence.

This piqued my curiosity, as it definitely reflected my own experience tweeting out papers and tracking the resulting clicks on the papers themselves. However, the demands of academia are quite different from those of corporate media. In my experience, ‘good’ posts do indeed result in a 2-3% conversion rate, or about 20-30 clicks on the DOI link for every 1,000 views. A typical post I consider ‘successful’ will net about 5-8k views and thus 150-200 clicks. Below are some samples of my most ‘successful’ paper tweets this month, with screen grabs of the twitter analytics for each:

[Screenshots: twitter analytics for four paper tweets, 4 December 2015]

Sharing each of these papers resulted in a conversion rate of about 2%, roughly in line with Derek’s experience. These are all what I would consider ‘successful’ shares, at least for me, with more than 100 engagements each. You can also see that, in total, external engagement (i.e., clicking the link to the paper) falls below ‘internal’ engagement (likes, RTs, expands, etc.). So it does appear that, on the whole, twitter shares may generate a lot of internal ‘buzz’ but not necessarily reach very far beyond twitter.
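As a quick sanity check on those figures, the arithmetic is trivial (a throwaway sketch of my own, not anything from Derek’s analysis):

```python
# Expected paper-link clicks for a given view count and conversion rate.
def clicks(views, rate):
    return round(views * rate)

# Rough view counts and conversion rates quoted above.
for views in (1000, 5000, 8000):
    for rate in (0.02, 0.03):
        print(f"{views} views at {rate:.0%} -> {clicks(views, rate)} clicks")
```

At 2-3% conversion, a 5-8k-view post lands in roughly the 100-240 click range, which brackets the 150-200 clicks I typically see.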

For a corporate sponsor, these conversion rates are unacceptable. But for academics, I would argue the ceiling of the genuinely interested audience is somewhere around 200 people, which corresponds pretty well with the paper clicks generated by successful posts. Academics are so highly specialized that I’d wager citation success is really about making sure your paper falls into the right hands, rather than the hands of people working in totally different areas. I’d suggest that even for landmark ‘high impact’ papers, eventual success will still be predicted more by adoption within your select core peer group (i.e., the other scientists who study vegetarian dinosaur poop in the Himalayan range). Anecdotally, I more regularly find papers that grab my interest on twitter than through any other channel. Moreover, unlike ‘self finds’, papers found on twitter seem more likely to be outside my immediate wheelhouse – statistics publications, genetics, and so on. This is an important, if difficult to quantify, type of impact.

In general, we have to ask what exactly a 2-3% conversion rate is worth. If 200 people click my paper link, are any of them actually reading it? To probe this a bit further, I used twitter’s new poll tool, which recently added support for multiple options, to ask my followers how often they read papers found on twitter:

[Screenshot: twitter poll results, 4 December 2015]

As you can see, of 145 responses, 82% in total said they read papers found on twitter “Occasionally” (52%) or “Frequently” (30%). If these numbers are at all representative, I think they are pretty reassuring for the academic twitter user. Around 45 of the ~150 respondents say they read papers found on twitter “frequently”, suggesting the service has become a major source of interesting papers, at least among its users. All together, my takeaway is that while you shouldn’t expect to beat the general 3% curve, the ability to get your published work onto the desks of as many as 50-100 members of your core audience is pretty powerful. This is a more tangible result than ‘engagements’ or conversion rates.
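For completeness, here’s the back-of-envelope conversion of those poll percentages into head counts (the percentages come from the poll screenshot; the little helper is mine):

```python
# Convert the poll percentages into approximate respondent counts.
responses = 145
pct = {"Frequently": 0.30, "Occasionally": 0.52}

counts = {label: round(responses * share) for label, share in pct.items()}
total_readers = sum(counts.values())  # respondents reading at least occasionally

print(counts)                          # roughly 44 "Frequently", 75 "Occasionally"
print(f"{total_readers / responses:.0%} read papers found on twitter")
```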

Finally, it’s worth noting that the picture of how this all relates to citation behavior is murky at best. A quick surf of the scientific literature correlating citation rate and social media exposure is inconclusive. Two papers found by Neurocritic are exemplars of my impression of this literature, with one claiming a large effect size and the other claiming none at all. In the end, I suspect how useful twitter is for sharing research depends on several factors, including your field (it’s probably more useful for machine learning than organic chemistry) and something I’d vaguely define as ‘network quality’. Ultimately, I suspect the rule is quality of followers over quantity: if your end goal is to get your papers into the hands of around 200 engaged readers (which twitter can do for you), then having a network that actually includes those people is probably worth more than being a ‘Kardashian’ of social media.

 

Top 200 terms in cognitive neuroscience according to neurosynth

Tonight I was playing around with some of the top features in neurosynth (the searchable terms with the highest number of studies containing that term). You can find the list here; just sort by the number of studies. I excluded the top 3 terms, which are boring (e.g., “image”, “response”, and “time”) and whose extremely high weights would mess up the wordle. I then created a word cloud in which each term’s size reflects its number of studies.
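The data prep amounts to: sort the feature list by study count, drop the top 3, and keep the next 200 as term-to-weight pairs for the word cloud. A minimal sketch, assuming a simple term/study-count CSV (the miniature data here is made up; the real neurosynth feature list is far longer):

```python
import csv
import io

# Hypothetical miniature of the neurosynth features export:
# each row is a term and the number of studies mentioning it.
raw = """term,studies
image,4000
response,3900
time,3800
memory,1200
motor,1100
visual,1000
pain,900
"""

rows = list(csv.DictReader(io.StringIO(raw)))
rows.sort(key=lambda r: int(r["studies"]), reverse=True)

# Drop the 3 generic high-count terms, keep the next 200.
top = rows[3:3 + 200]
weights = {r["term"]: int(r["studies"]) for r in top}
print(weights)
```

The resulting term: count pairs can then be pasted into a word-cloud generator like Wordle, which sizes each term by its weight.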

Here are the top 200 terms, sized according to the number of times each is reported across neurosynth’s 5,809 indexed fMRI studies:

[Image: word cloud of the top 200 neurosynth terms]

Pretty neat! These are the 200 terms the neurosynth database has the most information on, and together they give a pretty good overview of key concepts and topics in our field! I am sure there is something useful for everyone in there 😀

Direct link to the wordle:

Wordle: neurosynth