fMRI study of Shamans tripping out to phat drumbeats

Every now and then, I’m browsing RSS on the tube commute and come across a study that makes me laugh out loud, which of course results in me receiving lots of ‘tuts’ from my co-commuters. Anyhow, the latest such entry to the world of cognitive neuroscience is a study examining brain responses to drum beats in shamanic practitioners. Michael Hove and colleagues of the Max Planck Institute in Leipzig set out to study “Perceptual Decoupling During an Absorptive State of Consciousness” using functional magnetic resonance imaging (fMRI). What exactly does that mean? Apparently: looking at how brain connectivity in ‘experienced shamanic practitioners’ changes when they listen to rhythmic drumming. Hove and colleagues explain that across a variety of cultures, ‘quasi-isochronous drumming’ is used to induce ‘trance states’. If you’ve ever danced around a drum circle in the full moon light, or tranced out to Shpongle in your living room, I guess you get the feeling, right?

Anyway, Hove et al recruited 15 participants who were trained in “core shamanism,” described as:

“a system of techniques developed and codified by Michael Harner (1990) based on cross-cultural commonalities among shamanic traditions. Participants were recruited through the German-language newsletter of the Foundation of Shamanic Studies and by word of mouth.”

They then played these participants isochronous drumming (the trance condition) versus drumming with more irregular timing (the control condition). In what might be the greatest use of a Likert scale of all time, participants rated the question “Would you describe your experience as a deep shamanic journey?” (1 = not at all; 7 = very much so), and indeed rated the trance condition as more, well, trancey. Hove and colleagues then used a fairly standard connectivity analysis, examining eigenvector centrality differences between the two drumming conditions, as well as seed-based functional connectivity:



Hove et al report that compared to the non-trance condition, the posterior/dorsal cingulate, insula, and auditory brainstem regions become more ‘hublike’ during trance, as indicated by higher eigenvector centrality in these regions. Further, these regions showed stronger functional connectivity with the posterior cingulate cortex. I’ll let Hove and colleagues explain what to make of this:

“In sum, shamanic trance involved cooperation of brain networks associated with internal thought and cognitive control, as well as a dampening of sensory processing. This network configuration could enable an extended internal train of thought wherein integration and moments of insight can occur. Previous neuroscience work on trance is scant, but these results indicate that successful induction of a shamanic trance involves a reconfiguration of connectivity between brain regions that is consistent across individuals and thus cannot be dismissed as an empty ritual.”
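For the technically curious, the centrality analysis at the heart of this result is simple enough to sketch in a few lines. Below is a toy illustration on simulated region-by-time data; the ROI count, scan length, and normalisation are my own simplifications, not the voxel-wise pipeline Hove et al. actually ran:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for ROI time series: 50 regions x 200 volumes.
ts = rng.standard_normal((200, 50))

# Functional connectivity: pairwise Pearson correlation between regions.
fc = np.corrcoef(ts.T)

# Eigenvector centrality: the leading eigenvector of a non-negative
# similarity matrix. Shifting correlations into [0, 1] guarantees (by
# Perron-Frobenius) a non-negative leading eigenvector.
sim = (fc + 1.0) / 2.0
vals, vecs = np.linalg.eigh(sim)
centrality = np.abs(vecs[:, -1])      # eigenvector of the largest eigenvalue
centrality /= centrality.sum()        # normalise for comparability

print(centrality.argmax(), centrality.max())
```

A region whose centrality rises between conditions is becoming more ‘hublike’ in exactly the sense described above: it is more strongly connected to other well-connected regions.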

Ultimately the authors’ conclusion seems to be that these brain connectivity differences show that, if nothing else, something must be ‘really going on’ in shamanic states. To be honest, I’m not sure anyone disagreed with that to begin with. I can’t consider this study without thinking of early (and ongoing) meditation research, where esoteric monks are placed in scanners to show that ‘something really is going on’ in meditation. This argument seems to me to rely on a folk-psychological misunderstanding of how the brain works. Even in placebo conditioning, a typical example of a ‘mental effect’, we know of course that changes in the brain are responsible. Every experience (regardless of how complex) has some neural correlate. The trick is to relate these neural factors to behavioral ones in a way that actually advances our understanding of the mechanisms and experiences that generate them. The difficulty with these kinds of studies is that all we can do is perform reverse inference to try to interpret what is going on; the authors’ conclusion about changes in sensory processing is a clear example of this. What do changes in brain activity actually tell us about trance (and other esoteric) states? Without being coupled to some meaningful understanding of the states themselves, they certainly don’t reveal any particular mechanism or phenomenological quality. As a clear example, we’re surely pushing reductionism to its limit by asking participants to rate a self-described transcendent state on a unidimensional Likert scale. The authors do cite Francisco Varela (a pioneer of neurophenomenological methods), but don’t seem to further consider these limitations or possible future directions.

Overall, I don’t want to seem overly critical of this amusing study. Shamanic traditions are certainly a deeply important part of human cultural history, and understanding how they impact us emotionally, cognitively, and neurologically is a valuable goal. For what amounts to a small pilot study, the protocols seem fairly standard from a neuroscience standpoint. I’m less certain about who these ‘shamans’ actually are, what their practice constitutes, or how to think about the supposed ‘trance states’, but I suppose ‘something interesting’ was definitely going on. The trick is knowing exactly what that ‘something’ is.

Future studies might thus benefit from a more direct characterization of esoteric states and the cultural practices that generate them, perhaps through collaboration with anthropologists and/or the application of phenomenological and psychophysical methods. For now, however, I’ll just have to head to my local drum circle and vibe out the answers to these questions.

Hove MJ, Stelzer J, Nierhaus T, Thiel SD, Gundlach C, Margulies DS, Van Dijk KRA, Turner R, Keller PE, Merker B (2016) Brain Network Reconfiguration and Perceptual Decoupling During an Absorptive State of Consciousness. Cerebral Cortex 26:3116–3124.


oh BOLD where art thou? Evidence for a “mm-scale” match between intracortical and fMRI measures.

A frequently discussed problem with functional magnetic resonance imaging is that we don’t really understand how the hemodynamic ‘activations’ measured by the technique relate to actual neuronal phenomena. This is because fMRI measures the Blood-Oxygenation-Level Dependent (BOLD) signal, a complex vascular response to neuronal activity. As such, neuroscientists can easily get worried about all sorts of non-neural contributions to the BOLD signal, such as subjects gasping for air, pulse-related motion artefacts, and other generally uninteresting effects. We can even start to worry that the BOLD signal may not measure any particular aspect of neuronal activity at all, but rather some overly diluted, spatially unconstrained filter of it that simply lacks the key information for understanding brain processes.

Given that we generally use fMRI over neurophysiological methods (e.g. M/EEG) when we want to say something about the precise spatial generators of a cognitive process, addressing these ambiguities is of utmost importance. Accordingly, a variety of recent papers have utilized multi-modal techniques, for example combining optogenetics, direct recordings, and fMRI, to assess precisely which kinds of neural events contribute to alterations in the BOLD signal and its spatial (mis)localization. Now a paper published today in NeuroImage addresses this question by combining high-resolution 7-tesla fMRI with electrocorticography (ECoG) to determine the spatial overlap of finger-specific somatomotor representations captured by the two measures. Starting from the title’s claim that “BOLD matches neuronal activity at the mm-scale”, we can already be sure this paper will generate a great deal of interest.

From Siero et al (In Press)

As shown above, the authors managed to record high-resolution (1.5mm) fMRI in 2 subjects implanted with 23 x 11mm intracranial electrode arrays during a simple finger-tapping task. Motor responses from each finger were recorded and used to generate somatotopic maps of brain responses specific to each finger. This analysis was repeated in both ECoG and fMRI, which were then spatially co-registered to one another so the authors could directly compare the spatial overlap between the two methods. What they found appears, at first glance, to be quite impressive:
From Siero et al (In Press)

Here you can see the color-coded t-maps for the BOLD activations to each finger (top panel, A), the differential contrast contour maps for the ECoG (middle panel, B), and the maximum activation foci for both measures with respect to the electrode grid (bottom panel, C), in two individual subjects. Comparing the spatial maps for both the index and thumb suggests a rather strong consistency, both in terms of the topology of each effect and the location of their foci. Interestingly the little finger measurements seem somewhat more displaced, although similar topographic features can be seen in both. Siero and colleagues further compute the spatial correlation (Spearman’s R) across measures for each individual finger, finding an average correlation of .54, with a range between .31 and .81, a moderately high degree of overlap between the measures. Finally, the optimal amount of shift needed to minimise the spatial difference between the measures was computed and found to be between 1 and 3.1 millimetres, suggesting a slight systematic bias between ECoG and fMRI foci.
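That headline overlap number is just a rank correlation between two co-registered spatial maps. Here is a minimal sketch with simulated stand-ins (the grid dimensions echo the paper, but the data, noise level, and tie-free Spearman shortcut are all my own illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-ins for co-registered, finger-specific activation maps
# sampled at the electrode sites (real inputs: BOLD t-values and ECoG
# high-frequency band power). A shared map plus independent noise.
shared = rng.standard_normal((23, 11))
bold_map = shared + 0.5 * rng.standard_normal((23, 11))
ecog_map = shared + 0.5 * rng.standard_normal((23, 11))

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the ranks (no tie handling)."""
    rank = lambda v: np.argsort(np.argsort(v))
    return np.corrcoef(rank(x.ravel()), rank(y.ravel()))[0, 1]

rho = spearman(bold_map, ecog_map)
print(f"spatial Spearman rho = {rho:.2f}")
```

The paper’s additional shift analysis amounts to repeating this after translating one map against the other and reporting the offset that maximises the agreement.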

Are ‘We the BOLD’ ready to break out the champagne and get back to scanning in comfort, spatial anxieties at ease? While this is certainly a promising result, suggesting that the BOLD signal indeed captures functionally relevant neuronal parameters with reasonable spatial accuracy, it should be noted that the result is based on a very best-case scenario, and that a considerable degree of unique spatial variance remains between the two methods. The data presented by Siero and colleagues have undergone a number of crucial pre-processing steps that are likely to influence their results: the high degree of spatial resolution, the manual removal of draining veins, the restriction of their analysis to grey-matter voxels only, and the lack of spatial smoothing all render generalizing from these results to the standard 3-tesla whole-brain pipeline difficult. Indeed, even under these best-case criteria, the results still indicate up to 3mm of systematic bias in the fMRI results. Though we can be glad the bias was systematic and not random, 3mm is still quite a lot in the brain. On this point, the authors note that the stability of the bias may point towards a systematic mis-registration of the ECoG and fMRI data and/or possible rigid-body deformations introduced by the implantation of the electrodes, issues that could be addressed in future studies. Ultimately it remains to be seen whether similar reliability can be obtained for less robust paradigms than finger wagging, or in more standard, sub-optimal imaging scenarios. But for now I’m happy to let fMRI have its day in the sun, give or take a few millimeters.

Siero, J. C. W., Hermes, D., Hoogduin, H., Luijten, P. R., Ramsey, N. F., & Petridou, N. (2014). BOLD matches neuronal activity at the mm scale: A combined 7T fMRI and ECoG study in human sensorimotor cortex. NeuroImage. doi:10.1016/j.neuroimage.2014.07.002


Effective connectivity or just plumbing? Granger Causality estimates highly reliable maps of venous drainage.

update: for an excellent response to this post, see the comment by Anil Seth at the bottom of this article. Also don’t miss the extended debate regarding the general validity of causal methods for fMRI at Russ Poldrack’s blog that followed this post. 

While the BOLD signal can be a useful measurement of brain function when used properly, the fact that it indexes blood flow rather than neural activity raises more than a few significant concerns. That is to say, when we make inferences on BOLD, we want to be sure the observed effects are causally downstream of actual neural activity, rather than the product of physiological noise such as fluctuations in breath or heart rate. This is a problem for all fMRI analyses, but it is particularly tricky for resting-state fMRI, where we are interested in signal fluctuations that fall in the same frequency range as respiration and pulse. Now a new study has extended these troubles to Granger causality modelling (GCM), a lag-based method for estimating causal interactions between time series that is popular in the resting-state literature. Just how bad is the damage?

In an article published this week in PLOS ONE, Webb and colleagues analysed over a thousand scans from the Human Connectome database, examining the reliability of GCM estimates and the proximity of the major ‘hubs’ identified by GCM with known major arteries and veins. The authors first found that GCM estimates were highly robust across participants:

Plot showing robustness of GCM estimates across 620 participants. The majority of estimated causes did not show significant differences within or between participants (black datapoints).

They further report that “the largest [most robust] lags are for BOLD Granger causality differences for regions close to large veins and dural venous sinuses”. In other words, although the major ‘upstream’ and ‘downstream’ nodes estimated by GCM are highly robust across participants, regions primarily influencing other regions (i.e. causal outflow) map onto major arteries, whereas regions primarily receiving ‘inputs’ (i.e. causal inflow) map onto veins. This pattern of ‘causation’ is very difficult to explain as anything other than a non-neural artifact: the regions mostly ‘causing’ activity in others are exactly where fresh blood comes into the brain, and the regions primarily being influenced by others are areas of major blood drainage. Check out the arteriogram and venogram provided by the authors:

Depiction of major arteries (top image) and veins (bottom). Note overlap with areas of greatest G-cause (below).

Compare the above to their thresholded z-statistic map for significant Granger causality; white areas are significant G-causation overlapping with an arteriogram mask, green are significant areas overlapping with a venogram mask:

From paper:
“Figure 5. Mean Z-statistic for significant Granger causality differences to seed ROIs. Z-statistics were averaged for a given target ROI with the 264 seed ROIs to which it exhibited significantly asymmetric Granger causality relationship. Masks are overlaid for MRI arteriograms (white) and MRI venograms (green) for voxels with greater than 2 standard deviations signal intensity of in-brain voxels in averaged images from 33 (arteriogram) and 34 (venogram) subjects. Major arterial inflow and venous outflow distributions are labeled.”

It’s fairly obvious from the above that a significant proportion of the areas typically G-causing other areas overlap with arteries, whereas areas typically being G-caused by others overlap with veins. This is a serious problem for GCM of resting-state fMRI, and worse, these effects were also observed for a comprehensive range of task-based fMRI data. The authors come to the grim conclusion that “Such arterial inflow and venous drainage has a highly reproducible pattern across individuals where major arterial and venous distributions are largely invariant across subjects, giving the illusion of reliable timing differences between brain regions that may be completely unrelated to actual differences in effective connectivity”. Importantly, this isn’t the first time GCM has been called into question. A related concern is the impact of spatial variation in the lag between neural activation and the BOLD response (the ‘hemodynamic response function’, HRF) across the brain. Previous work using simultaneous intracranial and BOLD recordings has shown that due to these lags, GCM can estimate a causal pattern of A then B, whereas the actual neural activity was B then A.

This is because GCM acts in a relatively simple way: given two time series (A & B), if the future of B is better predicted from the past of both A and B than from the past of B alone, then A is said to G-cause B. However, as we’ve already established, BOLD is a messy and complex signal, where neural activity is filtered through slow blood fluctuations that must be carefully mapped back onto neural activity using deconvolution methods. Thus, what looks like A then B in BOLD can actually be due to differences in HRF lags between regions – GCM is blind to this, as it does not consider the underlying process producing the time series. Worse, while this problem can be resolved by combining GCM (which is naïve to the underlying cause of the analysed time series) with an approach that de-convolves each voxel-wise time series with a canonical HRF, the authors point out that such an approach would not resolve the concern raised here, namely that Granger causality largely picks up macroscopic temporal patterns in blood in- and out-flow:

“But even if an HRF were perfectly estimated at each voxel in the brain, the mechanism implied in our data is that similarly oxygenated blood arrives at variable time points in the brain independently of any neural activation and will affect lag-based directed functional connectivity measurements. Moreover, blood from one region may then propagate to other regions along the venous drainage pathways also independent of neural to vascular transduction. It is possible that the consistent asymmetries in Granger causality measured in our data may be related to differences in HRF latency in different brain regions, but we consider this less likely given the simpler explanation of blood moving from arteries to veins given the spatial distribution of our results.”
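To make the definition above concrete, here is a bare-bones bivariate Granger test written from the textbook recipe, on a simulated pair of series where A genuinely drives B. This is a sketch under simplifying assumptions (fixed lag order, no significance testing), not any published toolbox:

```python
import numpy as np

def granger_f(a, b, p=2):
    """F-like statistic for 'a Granger-causes b' at lag order p.

    Restricted model: predict b[t] from b's own p past values.
    Full model: additionally include a's p past values. If the full
    model's residuals are much smaller, a is said to G-cause b.
    """
    n = len(b)
    past = lambda x, t: x[t - p:t][::-1]               # p most recent lags
    X_r = np.array([past(b, t) for t in range(p, n)])
    X_f = np.array([np.r_[past(b, t), past(a, t)] for t in range(p, n)])
    y = b[p:]
    rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(X_r), rss(X_f)
    return ((rss_r - rss_f) / p) / (rss_f / (n - 3 * p))

rng = np.random.default_rng(2)
a = rng.standard_normal(500)
b = np.zeros(500)
for t in range(1, 500):
    b[t] = 0.8 * a[t - 1] + 0.2 * rng.standard_normal()  # a drives b

# The forward statistic should dwarf the reverse one.
print(granger_f(a, b), granger_f(b, a))
```

Note that the test operates directly on the raw series: it knows nothing about how those series were generated, which is exactly why BOLD lags and blood-flow timing can masquerade as ‘causality’.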

As for correcting these effects, the authors suggest that a nuisance-variable approach estimating vascular effects related to pulse, respiration, and breath-holding may be effective. However, they caution that the effects observed here (large-scale blood inflow and drainage) take place over a timescale an order of magnitude slower than actual neural differences, and that this approach would need extremely precise estimates of the associated nuisance waveforms to prevent confounded connectivity estimates. For now, I’d advise readers to be critical of what can actually be inferred from GCM until further research can be done, preferably using multi-modal methods capable of directly inferring the impact of vascular confounds on GCM estimates. Indeed, although I suppose I am a bit biased, I have to ask if it wouldn’t be simpler to just use Dynamic Causal Modelling, a technique explicitly designed for estimating causal effects between BOLD time series, rather than a method originally designed to estimate influences between financial stocks.

References for further reading:

Friston, K. (2009). Causal modelling and brain connectivity in functional magnetic resonance imaging. PLoS biology, 7(2), e33. doi:10.1371/journal.pbio.1000033

Friston, K. (2011). Dynamic causal modeling and Granger causality Comments on: the identification of interacting networks in the brain using fMRI: model selection, causality and deconvolution. NeuroImage, 58(2), 303–5; author reply 310–1. doi:10.1016/j.neuroimage.2009.09.031

Friston, K., Moran, R., & Seth, A. K. (2013). Analysing connectivity with Granger causality and dynamic causal modelling. Current opinion in neurobiology, 23(2), 172–8. doi:10.1016/j.conb.2012.11.010

Webb, J. T., Ferguson, M. A., Nielsen, J. A., & Anderson, J. S. (2013). BOLD Granger Causality Reflects Vascular Anatomy. (P. A. Valdes-Sosa, Ed.) PLoS ONE, 8(12), e84279. doi:10.1371/journal.pone.0084279

Chang, C., Cunningham, J. P., & Glover, G. H. (2009). Influence of heart rate on the BOLD signal: the cardiac response function. NeuroImage, 44(3), 857–69. doi:10.1016/j.neuroimage.2008.09.029

Chang, C., & Glover, G. H. (2009). Relationship between respiration, end-tidal CO2, and BOLD signals in resting-state fMRI. NeuroImage, 47(4), 1381–93. doi:10.1016/j.neuroimage.2009.04.048

Lund, T. E., Madsen, K. H., Sidaros, K., Luo, W.-L., & Nichols, T. E. (2006). Non-white noise in fMRI: does modelling have an impact? Neuroimage, 29(1), 54–66.

David, O., Guillemain, I., Saillet, S., Reyt, S., Deransart, C., Segebarth, C., & Depaulis, A. (2008). Identifying neural drivers with functional MRI: an electrophysiological validation. PLoS biology, 6(12), 2683–97. doi:10.1371/journal.pbio.0060315

Update: This post continued into an extended debate on Russ Poldrack’s blog, where Anil Seth made the following (important) comment 

Hi, this is Anil Seth. What an excellent debate, and I hope I can add a few quick thoughts of my own, since this is an issue close to my heart (no pun intended re vascular confounds).

First, back to the Webb et al paper. They indeed show that a vascular confound may affect GC-fMRI, but only in the resting state and given suboptimal TR and averaging over diverse datasets. Indeed, I suspect that their autoregressive models may be poorly fit, so that the results rather reflect a sort of mental chronometry a la Menon, rather than GC per se.
In any case, the more successful applications of GC-fMRI are those that compare experimental conditions or correlate GC with some behavioural variable (see e.g. Wen et al.). In these cases hemodynamic and vascular confounds may subtract out.
Interpreting findings like these means remembering that GC is a description of the data (i.e. DIRECTED FUNCTIONAL connectivity) and is not a direct claim about the underlying causal mechanism (e.g. like DCM, which is a measure of EFFECTIVE connectivity). Therefore (model light) GC and (model heavy) DCM are to a large extent asking and answering different questions, and to set them in direct opposition is to misunderstand this basic point. Karl, Ros Moran, and I make these points in a recent review.
Of course both methods are complex and ‘garbage in, garbage out’ applies: naive application of either is likely to be misleading or worse. Indeed, the indirect nature of fMRI BOLD means that causal inference will be very hard. But this doesn’t mean we shouldn’t try. We need to move to network descriptions in order to get beyond the neo-phrenology of functional localization. And so I am pleased to see recent developments in both DCM and GC for fMRI. For the latter, with Barnett and Chorley I have shown that GC-fMRI is INVARIANT to hemodynamic convolution given fast sampling and low noise. This counterintuitive finding defuses a major objection to GC-fMRI and has been established both in theory and in a range of simulations of increasing biophysical detail. With the development of low-TR multiband sequences, this means there is renewed hope for GC-fMRI in practice, especially when executed in an appropriate experimental design. Barnett and I have also just released a major new GC software package which avoids separate estimation of full and reduced AR models, avoiding a serious source of bias afflicting previous approaches.
Overall I am hopeful that we can move beyond premature rejection of promising methods on the grounds they fail when applied without appropriate data or sufficient care.  This applies to both GC and fMRI. These are hard problems but we will get there.

Mind-wandering and metacognition: variation between internal and external thought predicts improved error awareness

Yesterday I published my first paper on mind-wandering and metacognition, with Jonny Smallwood, Antoine Lutz, and collaborators. This was a fun project for me, as I spent much of my PhD exhaustively reading the literature on mind-wandering and default mode activity, resulting in a lot of intense debate at my research center. When we had Jonny over as an opponent at my PhD defense, the chance to collaborate was simply too good to pass up. Mind-wandering is super interesting precisely because we do it so often. One of my favourite anecdotes comes from around the time I was arguing heavily for the role of the default mode in spontaneous cognition to some very skeptical colleagues. The next day, while waiting to cross the street, one such colleague rode up next to me on his bicycle and joked, “are you thinking about the default mode?” And indeed I was – meta-mind-wandering!

One thing that has really bothered me about much of the mind-wandering literature is how frequently it is presented as attention = good, mind-wandering = bad. Can you imagine how unpleasant it would be if we never mind-wandered? Just picture trying to solve a difficult task while being totally 100% focused. This kind of hyper-locking attention can easily become pathological, preventing us from altering course when our behaviour goes awry or when something internal needs to be adjusted. Mind-wandering serves many positive purposes, from stimulating our imaginations, to motivating us in boring situations with internal rewards (boring task… “ahhhh remember that nice mojito you had on the beach last year?”). Yet we largely see papers exploring the costs – mood deficits, cognitive control failure, and so on. In the meditation literature this has even been taken up to form the misguided idea that meditation should reduce or eliminate mind-wandering (even though there is almost zero evidence to this effect…)

Sometimes our theories end up reflecting our methodological apparatus, to the extent that they may not fully capture reality. I think this is part of what has happened with mind-wandering, which was originally defined in relation to difficult (and boring) attention tasks. Worse, mind-wandering is usually operationalized as a dichotomous state (“offtask” vs “ontask”), when a little introspection strongly suggests it is much more of a fuzzy, dynamic transition between meta-cognitive and sensory processes. By studying mind-wandering just as the mean number of times you were “offtask”, we’re taking the stream of consciousness and acting as if the ‘depth’ at one point in the river is the entire story – but what about flow rate, tidal patterns, fishies, and all the dynamic variability that defines the river? My idea was that one simple way to get at this is by looking at the within-subject variability of mind-wandering, rather than just the overall mean ‘rate’. In this way we could get some idea of the extent to which a person’s mind-wandering fluctuated over time, rather than just categorising these events dichotomously.

The EAT task used in my study, with thought probes.

To do this, we combined a classical meta-cognitive response inhibition paradigm, the “error awareness task” (pictured above), with standard interleaved “thought probes” asking participants to rate on a scale of 1-7 the subjective frequency of task-unrelated thoughts in the task interval prior to the probe. We then examined the relationship between the ability to perform the task (“stop accuracy”) and each participant’s mean task-unrelated thought (TUT) rating. Here we expected to replicate the well-established relationship between TUTs and attention decrements (after all, it’s difficult to inhibit your behaviour if you are thinking about the hunky babe you saw at the beach last year!). We further examined whether the standard deviation of TUT within each participant (TUT variability) would predict error monitoring, reflecting a relationship between metacognition and increased fluctuation between internal and external cognition (after all, isn’t that kind of the point of metacognition?). Of course, for specificity and completeness, we conducted each multiple regression analysis with the complementary variable included as a control predictor. Here is the key finding from the paper:

Regression analysis of TUT, TUT variability, stop accuracy, and error awareness.

As you can see in the bottom right, we clearly replicated the relationship of increased overall TUT predicting poorer stop performance. Individuals who report an overall high intensity/frequency of mind-wandering unsurprisingly commit more errors. What was really interesting, however, was that the more variable a participant’s mind-wandering, the greater their error-monitoring capacity (top left). This suggests that individuals who show more fluctuation between internally and externally oriented attention may be better able to enjoy the benefits of mind-wandering while simultaneously limiting its costs. Of course, these are only individual differences (i.e. correlations) and should be treated as highly preliminary. It is possible, for example, that participants who use more of the TUT scale have higher meta-cognitive ability in general, rather than the two variables being causally linked in the way we suggest. We are careful to raise these and other limitations in the paper, but I do think this finding is a nice first step.
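The logic of the variability analysis can be sketched with simulated data. Everything below is hypothetical for illustration (participant count, effect sizes, and noise levels are made up, and the simulated outcomes are built to mirror the reported pattern rather than derived from our data):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30                                     # hypothetical participants

# Per-participant summaries of the 1-7 thought-probe ratings.
tut_mean = rng.uniform(2, 6, n)            # overall level of mind-wandering
tut_sd = rng.uniform(0.3, 2.0, n)          # its within-person variability

# Simulated outcomes mirroring the paper's pattern (illustrative only):
# higher mean TUT -> worse stop accuracy; higher TUT variability ->
# better error awareness.
stop_acc = 0.90 - 0.05 * tut_mean + 0.02 * rng.standard_normal(n)
error_aware = 0.50 + 0.15 * tut_sd + 0.05 * rng.standard_normal(n)

def std_betas(y, preds):
    """Standardised OLS coefficients, with an intercept column."""
    Z = (preds - preds.mean(0)) / preds.std(0)
    X = np.column_stack([np.ones(len(y)), Z])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

# Each model includes both predictors, so each controls for the other.
preds = np.column_stack([tut_mean, tut_sd])
b_stop = std_betas(stop_acc, preds)        # expect negative mean-TUT beta
b_aware = std_betas(error_aware, preds)    # expect positive variability beta
print(b_stop, b_aware)
```

The key design point is that the mean and the standard deviation of the same probe ratings enter the model together, so any effect of variability holds over and above the overall amount of mind-wandering.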

To ‘probe’ a bit further we looked at the BOLD responses to correct stops, and the parametric correlation of task-related BOLD with the TUT ratings:

Activations during correct stop trials.
Deactivations to stop trials (blue) and parametric correlation with TUT reports (red)

As you can see, correct stop trials elicit a rather canonical activation pattern in the motor-inhibition and salience networks, with concurrent deactivations in the visual cortex and the default mode network (second figure, blue blobs). I think of this pattern a bit like the brain receiving the ‘stop signal’ and going (a la Picard), “FULL STOP, MAIN VIEWER OFF, FIRE THE PHOTON TORPEDOES!”, launching into full response-recovery mode. Interestingly, while we replicated the finding of medial-prefrontal co-variation with TUTs (second figure, red blob), this area was substantially more rostral than the stop-related deactivations, supporting previous findings of some degree of functional segregation between the inhibitory and mind-wandering-related components of the DMN.

Finally, when examining the Aware > Unaware errors contrast, we replicated the typical salience network activations (mid-cingulate and anterior insula). Interestingly we also found strong bilateral activations in an area of the inferior parietal cortex also considered to be a part of the default mode. This finding further strengthens the link between mind-wandering and metacognition, indicating that the salience and default mode network may work in concert during conscious error awareness:

Activations to Aware > Unaware errors contrast.

In all, this was a very valuable and fun study for me. As a PhD student, being able to replicate the function of the classic “executive, salience, and default mode” ‘resting state’ networks with a basic task was a great experience, helping me place some confidence in these labels. I was also able to combine a classical behavioral metacognition task with some introspective thought probes, and show that they do indeed contain valuable information about task performance and related brain processes. Importantly though, we showed that the ‘content’ of mind-wandering reports doesn’t tell the whole story of spontaneous cognition. In the future I would like to explore this idea further, perhaps by taking a time-series approach to probe the dynamics of mind-wandering, using a simple continuous feedback device that participants could use throughout an experiment. In the affect literature such devices have been used to probe the dynamics of valence-arousal while participants view naturalistic movies, and I believe such an approach could reveal even greater granularity in how the experience of mind-wandering (and its fluctuation) interacts with cognition. Our findings suggest that the relationship between mind-wandering and task performance may be more nuanced than mere antagonism, an important finding I hope to explore in future research.

Citation: Allen M, Smallwood J, Christensen J, Gramm D, Rasmussen B, Jensen CG, Roepstorff A and Lutz A (2013) The balanced mind: the variability of task-unrelated thoughts predicts error monitoring. Front. Hum. Neurosci. 7:743. doi: 10.3389/fnhum.2013.00743

Is the resting BOLD signal physiological noise? What about resting EEG?

Over the past 5 years, resting-state fMRI (rsfMRI) has exploded in popularity. Literally dozens of papers are published each day examining slow (<0.1 Hz) or “low frequency” fluctuations in the BOLD signal. When I first moved to Europe I was caught up in the somewhat North American frenzy of resting state networks. I couldn’t understand why my Danish colleagues, who specialize in modelling physiological noise in fMRI, simply did not take the literature seriously. The problem is essentially that the low frequencies examined in these studies are the same as those that dominate physiological rhythms. Respiration and cardiac pulsation can account for a massive amount of variability in the BOLD signal. Before resting state fMRI came along, nearly every fMRI study discarded any frequencies lower than one oscillation every 120 seconds (i.e. 1/120 Hz high-pass filtering). Simple things like breath holding and pulsatile motion in vasculature can cause huge effects in BOLD data, and it just so happens that these artifacts (which are non-neural in origin) tend to pool around some of our favorite “default” areas: medial prefrontal cortex, insula, and other large gyri near draining veins.
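For readers who haven’t done fMRI preprocessing: that 1/120 Hz high-pass step is a one-liner in most packages. Here is a minimal sketch of the idea with SciPy (the function name and toy signals are mine, not from any particular package):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def highpass_bold(ts, tr=2.0, cutoff_hz=1.0 / 120.0, order=2):
    """Remove fluctuations slower than cutoff_hz from a voxel timeseries.

    ts: 1D BOLD timeseries sampled once per TR (in seconds).
    """
    nyquist = 0.5 / tr  # the BOLD series is sampled at 1/TR Hz
    b, a = butter(order, cutoff_hz / nyquist, btype="highpass")
    return filtfilt(b, a, ts)  # zero-phase filtering, no temporal shift

# Toy example: a faster "signal" riding on a slow scanner drift
tr = 2.0
t = np.arange(0, 600, tr)                 # a 10-minute scan
slow_drift = np.sin(2 * np.pi * t / 300)  # 1/300 Hz: below cutoff, removed
fast_signal = np.sin(2 * np.pi * t / 20)  # 1/20 Hz: above cutoff, kept
filtered = highpass_bold(slow_drift + fast_signal, tr=tr)
```

The point of the post, of course, is that the interesting resting-state fluctuations live below this cutoff, right alongside the physiological rhythms that the filter used to throw away.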

Naturally this leads us to ask if the “resting state networks” (RSNs) observed in such studies are actually neural in origin, or if they are simply the result of variations in breath pattern or the like. Obviously we can’t answer this question with fMRI alone. We can apply something like independent component analysis (ICA) and hope that it removes most of the noise, but we’ll never really be 100% sure we’ve gotten it all that way. We can measure the noise directly (e.g. “nuisance covariance regression”) and include it in our GLM, but much of the noise is likely to be highly correlated with the signal we want to observe. What we need are cross-modality validations that low-frequency oscillations do exist, that they drive observed BOLD fluctuations, and that these relationships hold even when controlling for non-neural signals. Some of this is already established: for example, direct intracranial recordings do find slow oscillations in animal models. In MEG and EEG, it is well established that slow fluctuations exist and have a functional role.
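To make “nuisance covariance regression” concrete, here is a toy sketch (variable and function names are mine): fit the measured noise trace to the data with ordinary least squares and keep the residual. It also illustrates the caveat above – any neural signal that genuinely co-varies with respiration gets regressed out along with the noise.

```python
import numpy as np

def regress_out(ts, nuisance):
    """Residualize a timeseries against nuisance regressors via OLS."""
    X = np.column_stack([np.ones(len(ts)), nuisance])  # add an intercept
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
    return ts - X @ beta  # the residual is the "cleaned" timeseries

# Toy data: a BOLD trace contaminated by a measured respiration trace
rng = np.random.default_rng(0)
T = 200
respiration = rng.standard_normal(T)   # e.g. an RVT-style noise regressor
neural = rng.standard_normal(T)        # the part we actually care about
bold = neural + 2.0 * respiration      # heavily breath-contaminated signal
cleaned = regress_out(bold, respiration[:, None])
```

After the regression, the cleaned trace is (by construction) orthogonal to the measured respiration, while still tracking the underlying “neural” component – which is exactly why everything hinges on actually measuring the noise.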

So far so good. But what about in fMRI? Can we measure meaningful signal while controlling for these factors? This is currently a topic of intense research interest. Marcus Raichle, the ‘father’ of the default mode network, highlights fascinating multi-modal work from a Finnish group showing that slow fluctuations in behavior and EEG signal coincide (Raichle and Snyder 2007; Monto, Palva et al. 2008). However, we should still be cautious: I recently spoke to a post-doc from the Helsinki group about the original paper, and he stressed that slow EEG is just as contaminated by physiological artifacts as fMRI. If anything the problem is worse, because in EEG the artifacts may be several orders of magnitude larger than the signal of interest[i].

Understandably, I was interested to see a paper entitled “Correlated slow fluctuations in respiration, EEG, and BOLD fMRI” appear in NeuroImage today (Yuan, Zotev et al. 2013). The authors simultaneously collected EEG, respiration, pulse, and resting fMRI data in 9 subjects, then performed cross-correlation and GLM analyses on the relationships between these variables during both eyes-closed and eyes-open rest. They calculated respiration volume per time (RVT), a measure developed by Rasmus Birn, to assign a respiratory phase to each TR (Birn, Diamond et al. 2006). One key finding is that global variations in EEG power are strongly predicted by RVT during eyes-closed rest, with a maximum peak correlation coefficient of .40. Here are the two time series:
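Roughly speaking, RVT is “breath depth divided by breath period”, estimated breath-by-breath from the belt trace and then sampled at each TR. A crude sketch of the idea (my own simplification of Birn et al.’s measure, not their implementation):

```python
import numpy as np
from scipy.signal import find_peaks

def rvt(resp, fs, tr, n_trs):
    """Crude respiration-volume-per-time estimate (after Birn et al., 2006).

    For each breath: (inhale peak - exhale trough) / breath period,
    interpolated to the acquisition time of each TR.
    resp: raw belt trace sampled at fs Hz.
    """
    peaks, _ = find_peaks(resp, distance=int(fs * 2))    # >= 2 s between breaths
    troughs, _ = find_peaks(-resp, distance=int(fs * 2))
    tr_grid = np.arange(n_trs) * tr
    t_peaks = peaks / fs
    env_top = np.interp(tr_grid, t_peaks, resp[peaks])       # inhale envelope
    env_bot = np.interp(tr_grid, troughs / fs, resp[troughs])  # exhale envelope
    period = np.interp(tr_grid, t_peaks[1:], np.diff(t_peaks))  # breath period
    return (env_top - env_bot) / period

# Simulated belt trace: breathing that gradually deepens over 5 minutes
fs, tr, dur = 50.0, 2.0, 300.0
t = np.arange(0, dur, 1 / fs)
depth = 1.0 + 0.5 * t / dur
resp = depth * np.sin(2 * np.pi * 0.25 * t)  # ~15 breaths per minute
series = rvt(resp, fs, tr, int(dur / tr))
```

On this simulated trace the slowly deepening breathing shows up as a rising RVT timeseries – exactly the kind of slow, drifting regressor that then gets correlated against BOLD and EEG.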


You can clearly see that there is a strong relationship between global alpha power (GFP; global field power) and respiration (RVT). The authors state that “GFP appears to lead RVT”, though I am not so sure. Regardless, there is a clear relationship between eyes-closed ‘alpha’ and respiration. Interestingly, they find that correlations between RVT and GFP with eyes open were not significantly different from chance, and that pulse did not correlate with GFP. They then conduct GLM analyses with RVT and GFP as BOLD regressors. Here is what their example subject looked like during eyes-closed rest:
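A claim like “GFP appears to lead RVT” comes from a lagged cross-correlation: shift one series against the other and see which lag maximizes the correlation. A toy sketch (names mine) with series where the lead is built in:

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Correlation of x with y, with x shifted `lag` samples earlier
    (positive lag means x leads y)."""
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(x[-lag:], y[:lag])[0, 1]
    return np.corrcoef(x, y)[0, 1]

def peak_lag(x, y, max_lag):
    """Lag in [-max_lag, max_lag] maximizing the correlation of x with y."""
    lags = list(range(-max_lag, max_lag + 1))
    corrs = [lagged_corr(x, y, lag) for lag in lags]
    best = int(np.argmax(corrs))
    return lags[best], corrs[best]

# Toy series in which the "GFP" genuinely leads the "RVT" by 5 samples
rng = np.random.default_rng(1)
base = np.convolve(rng.standard_normal(520), np.ones(10) / 10, mode="same")
gfp = base[10:510]
rvt = base[5:505] + 0.05 * rng.standard_normal(500)
lag, r = peak_lag(gfp, rvt, max_lag=10)
```

Judging lead/lag this way is fragile for smooth, noisy series – the cross-correlation peak is broad – which is part of why I’m hesitant to read much into who leads whom here.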


Notice any familiar “RSNs” in the RVT map? I see anti-correlated executive deactivation and default mode activation! Very canonical. Too bad they are breath-related. This is why noise-regression experts tend to dislike rsfMRI, particularly when you don’t measure the noise. We also shouldn’t be too surprised that the GFP-BOLD and RVT-BOLD maps look similar, considering that GFP and RVT are highly correlated. After looking at these correlations separately, Yuan et al perform RETROICOR physiological noise correction and then reexamine the contrasts. Here are the group maps:


Things look a bit less default-mode-like in the group RVT map, but the RVT and GFP maps are still clearly quite similar. In panel D you can see that physiological noise correction has a large global impact on GFP-BOLD correlations, suggesting that quite a bit of this co-variance is driven by physiological noise. Put simply, respiration explains a large share of the alpha-BOLD correlation; any experiment not modelling this covariance is likely to produce strongly contaminated results. Yuan et al go on to examine eyes-open rest and show that, similar to their RVT-GFP cross-correlation analysis, not nearly as much seems to be happening in eyes-open compared to eyes-closed rest:


The authors conclude that “In particular, this correlation between alpha EEG and respiration is much stronger in eyes-closed resting than in eyes-open resting” and that “[the] results also suggest that eyes-open resting may be a more favorable condition to conduct brain resting state fMRI and for functional connectivity analysis because of the suppressed correlation between low-frequency respiratory fluctuation and global alpha EEG power, therefore the low-frequency physiological noise predominantly of non-neuronal origin can be more safely removed.” Fair enough- one conclusion is certainly that eyes closed rest seems much more correlated with respiration than eyes open. This is a decent and useful result of the study. But then they go on to make this really strange statement, which appears in the abstract, introduction, and discussion:

“In addition, similar spatial patterns were observed between the correlation maps of BOLD with global alpha EEG power and respiration. Removal of respiration related physiological noise in the BOLD signal reduces the correlation between alpha EEG power and spontaneous BOLD signals measured at eyes-closed resting. These results suggest a mutual link of neuronal origin between the alpha EEG power, respiration, and BOLD signals”’ (emphasis added)

That’s one way to put it! The logic here is that since alpha = neural activity, and respiration correlates with alpha, then alpha must be the neural correlate of respiration. I’m sorry guys, you did a decent experiment, but I’m afraid you’ve gotten this one wrong. There is absolutely nothing that implies alpha power cannot also be contaminated by respiration-related physiological noise. In fact it is exactly the opposite: at the low frequencies examined by Yuan et al, the EEG data is particularly likely to be contaminated by physiological artifacts! And that is precisely what the paper shows – in the authors’ own words: “impressively strong correlations between global alpha and respiration”. This is further corroborated by the strong similarity between the RVT-BOLD and alpha-BOLD maps, and by the fact that removing respiratory and pulse variance drastically alters the alpha-BOLD correlations!

So what should we take away from this study? It is of course inconclusive: there are several aspects of the methodology that puzzle me, and sadly the study is rather under-powered at n = 9. I found it quite curious that in each of the BOLD-alpha maps there seemed to be a significant artifact in the lateral and posterior ventricles, even after physiological noise correction (check out figure 2b, an almost perfect ventricle map). If their global alpha signal is specific to a neural origin, why does this artifact remain even after physiological noise correction? I can’t quite put my finger on it, but it seems likely to me that some source of noise remained even after correction – perhaps a reader with more experience in EEG-fMRI methods can comment. For one thing their EEG motion correction seems a bit suspect, as they simply drop outlier timepoints. One way or another, I believe we should take one clear message away from this study: low-frequency signals are not easily untangled from physiological noise, even in electrophysiology. This isn’t a damnation of all resting state research – rather it is a clear sign that we need to be measuring these signals to retain a degree of control over our data, particularly in the resting state, where we have the least experimental control of all.


Birn, R. M., J. B. Diamond, et al. (2006). “Separating respiratory-variation-related fluctuations from neuronal-activity-related fluctuations in fMRI.” Neuroimage 31(4): 1536-1548.

Monto, S., S. Palva, et al. (2008). “Very slow EEG fluctuations predict the dynamics of stimulus detection and oscillation amplitudes in humans.” The Journal of Neuroscience 28(33): 8268-8272.

Raichle, M. E. and A. Z. Snyder (2007). “A default mode of brain function: a brief history of an evolving idea.” Neuroimage 37(4): 1083-1090.

Yuan, H., V. Zotev, et al. (2013). “Correlated Slow Fluctuations in Respiration, EEG, and BOLD fMRI.” NeuroImage, in press.


[i] Note that this is not meant to be in any way a comprehensive review. A quick literature search suggests that there are quite a few recent papers on resting BOLD-EEG. I recall a well done paper by a group at the Max Planck Institute that did include noise regressors, and found unique slow BOLD-EEG relations. I cannot seem to find it at the moment however!


Correcting your naughty insula: modelling respiration, pulse, and motion artifacts in fMRI

important update: Thanks to commenter “DS”, I discovered that my respiration-related data were strongly contaminated due to mechanical error. The belt we used is very susceptible to becoming uncalibrated, for example if the subject moves or breathes very deeply. When looking at the raw timecourse of respiration I could see that many subjects, including the one displayed here, show a great deal of “clipping” in the timeseries. For the final analysis I will not use the respiration regressors, but rather just the pulse and motion. Thanks DS!

As I’m working my way through my latest fMRI analysis, I thought it might be fun to share a little bit of that here. Right now I’m coding up a batch pipeline for data from my Varela-award project, in which we compared “adept” meditation practitioners with motivation-, IQ-, age-, and gender-matched controls on a response-inhibition and error monitoring task. One thing that came up in the project proposal meeting was a worry that, since meditation practitioners spend so much time working with the breath, they might breathe differently either at rest or during the task. As I’ve written about before, respiration and other related physiological variables, such as cardiac-pulsation-induced motion, can seriously impact your fMRI results (when your heart beats, the veins in your brain pulsate, creating slight but consistent and troublesome MR artifacts). As you might expect, these artifacts tend to be worse around the main draining veins of the brain, several of which cluster around the frontoinsular and medial-prefrontal/anterior cingulate cortices. As these regions are important for response-inhibition and are frequently reported in the meditation literature (without physiological controls), we wanted to try to control for these variables in our study.

disclaimer: i’m still learning about noise modelling, so apologies if I mess up the theory/explanation of the techniques used! I’ve left things a bit vague for that reason. See bottom of article for references for further reading. To encourage myself to post more of these “open-lab notes” posts, I’ve kept the style here very informal, so apologies for typos or snafus. 😀

To measure these signals, we used the respiration belt and pulse monitor that come standard with most modern MRI machines. The belt is just a little elastic hose that you strap around the chest wall of the subject, where it records expansions and contractions of the chest to give a time series corresponding to respiration; the pulse monitor is a standard finger clip. Although I am not an expert on physiological noise modelling, I will do my best to explain the basic effects you want to model out of your data. These “non-white” noise signals include pulsation- and respiration-induced motion (when you breathe, you tend to nod your head just slightly along the z-axis), typical motion artifacts, and variability of pulsation and respiration. To do this I fed my physiological parameters into an in-house function written by Torben Lund, which incorporates a RETROICOR transformation of the pulsation and respiration timeseries. We don’t just use the raw timeseries due to signal aliasing – the physio data needs to be phase-shifted so that each physiological event corresponds to a TR. The function also calculates respiration volume per time (RVT), a measure developed by Rasmus Birn, to model the variability in physiological parameters [1]. Variability in respiration and pulse (if one group of subjects tends to inhale sharply for some conditions but not others, for example) is more likely to drive BOLD artifacts than absolute respiratory volume or frequency. Finally, as is standard, I included the realignment parameters to model subject motion-related artifacts. Here is a shot of my monster design matrix for one subject:
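To give a flavor of what a RETROICOR transformation computes (this is my own bare-bones sketch of the Glover et al. approach, not Torben’s function): each volume is assigned a phase within the current cardiac or respiratory cycle, and low-order Fourier expansions of that phase become the nuisance regressors.

```python
import numpy as np

def retroicor_regressors(event_times, tr_times, n_harmonics=2):
    """RETROICOR-style phase regressors (after Glover et al., 2000).

    event_times: times of physiological events (e.g. pulse peaks), in s.
    tr_times: acquisition time of each volume, in s.
    Returns an (n_volumes, 2 * n_harmonics) matrix of cos/sin phase terms.
    """
    regs = np.zeros((len(tr_times), 2 * n_harmonics))
    for i, t in enumerate(tr_times):
        prev = event_times[event_times <= t]
        nxt = event_times[event_times > t]
        if len(prev) == 0 or len(nxt) == 0:
            continue  # volume falls outside the recorded events: leave zero
        # phase = fraction of the current cycle elapsed, mapped to [0, 2*pi)
        phase = 2 * np.pi * (t - prev[-1]) / (nxt[0] - prev[-1])
        for m in range(1, n_harmonics + 1):
            regs[i, 2 * (m - 1)] = np.cos(m * phase)
            regs[i, 2 * (m - 1) + 1] = np.sin(m * phase)
    return regs

# Toy numbers: a perfectly regular 1 Hz pulse sampled with a 2 s TR
pulse_peaks = np.arange(0.25, 300, 1.0)
tr_times = np.arange(0, 298, 2.0)
regs = retroicor_regressors(pulse_peaks, tr_times)
```

Note what the toy numbers show: with a perfectly regular 1 Hz pulse and a 2 s TR, every volume lands at the identical cardiac phase – the aliasing problem mentioned above, and the reason the raw physio trace can’t just be dropped into the design matrix directly.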


You can see that the first 7 columns model my conditions (correct stops, unaware errors, aware errors, false alarms, and some self-report ratings), the next 20 model the RETROICOR-transformed pulse and respiration timeseries, then 41 columns for RVT, 6 for the realignment parameters, and finally my session offsets and constant. It’s a big DM, but since we have over 1000 degrees of freedom, I’m not too worried about the extra regressors in terms of loss of power. What would be worrisome is if, for example, stop activity correlated strongly with any of the nuisance variables – we can see from the orthogonality plot that in this subject at least, that is not the case. Now let’s see if we actually have anything interesting left over after we remove all that noise:
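A quick aside before the maps: that orthogonality check boils down to the cosines between design-matrix columns (0 = orthogonal, 1 = collinear); SPM’s design-review plot is essentially this matrix in gray-scale. A toy version with stand-in regressors (names mine):

```python
import numpy as np

def cosine_matrix(X):
    """Pairwise |cosine| between the columns of a design matrix X."""
    Xn = X / np.linalg.norm(X, axis=0)  # unit-normalize each column
    return np.abs(Xn.T @ Xn)

# Stand-in regressors: one "task" column and three "physio" columns
rng = np.random.default_rng(2)
task = rng.standard_normal(300)
physio = rng.standard_normal((300, 3))
C = cosine_matrix(np.column_stack([task, physio]))
```

If the first row held values near 1, the task regressor would be collinear with the nuisance columns, and its betas would be absorbing (or surrendering) nuisance variance – the worrisome case described above.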

stop SPM

We can see that the Stop-related activity seems pretty reasonable, clustering around the motor and premotor cortex, bilateral insula, and DLPFC, all canonical motor inhibition regions (FWE-cluster corrected p = 0.05). This is a good sign! Now what about all those physiological regressors? Are they doing anything of value, or just sucking up our power? Here is the f-contrast over the pulse regressors:


Here we can see that the peak signal is wrapped right around the pons/upper brainstem. This makes a lot of sense – the area is full of the primary vasculature that ferries blood into and out of the brain. If I was particularly interested in getting signal from the brainstem in this project, I could use a respiration × pulse interaction regressor to better model this [6]. Penny et al find similar results to our cardiac F-test when comparing AR(1) with higher-order AR models [7]. But since we’re really only interested in higher cortical areas, the pulse regressor should be sufficient. We can also see quite a bit of variance explained around the bilateral insula and rostral anterior cingulate. Interestingly, our stop-related activity still contained plenty of significant insula response, so we can feel better that some but not all of the signal from that region is actually functionally relevant. What about respiration?


Here we see a ton of variance explained around the occipital lobe. This makes good sense – we tend to just slightly nod our head back and forth along the z-axis as we breathe. What we are seeing is the motion-induced artifact of that rotation, which is most severe along the back of the head and periphery of the brain. We see a similar result for the overall motion regressors, but flipped to the front:

Ignore the above, respiration regressor is not viable due to “clipping”, see note at top of post. Glad I warned everyone that this post was “in progress” 🙂 Respiration should be a bit more global, restricted to ventricles and blood vessels.


Wow, look at all the significant activity! Someone call up Nature and let them know, motion lights up the whole brain! As we would expect, the motion regressor explains a ton of uninteresting variance, particularly around the prefrontal cortex and periphery.

I still have a ways to go on this project – obviously this is just a single subject, and the results could vary wildly. But I do think even at this point we can start to see that it is quite easy and desirable to model these effects in your data (note: we had some technical failure due to the respiration belt being a POS…). I should note that in SPM, these sources of “non-white” noise are typically modelled using an autoregressive AR(1) model, which is enabled in the default settings (we’ve turned it off here). However, as there is evidence that this model performs poorly at faster TRs (which are now the norm), and that a noise-modelling approach can greatly improve SNR while removing artifacts, we are likely to get better performance out of a nuisance regression technique as demonstrated here [4]. The next step will be to take these regressors to a second-level analysis, to examine whether the meditation group has significantly more BOLD variance explained by physiological noise than do controls. Afterwards, I will re-run the analysis without any physio parameters, to compare the results of both.


1. Birn RM, Diamond JB, Smith MA, Bandettini PA.
Separating respiratory-variation-related fluctuations from neuronal-activity-related fluctuations in fMRI.
Neuroimage. 2006 Jul 15;31(4):1536-48. Epub 2006 Apr 24.

2. Brooks JCW, Beckmann CF, Miller KL, Wise RG, Porro CA, Tracey I, Jenkinson M.
Physiological noise modelling for spinal functional magnetic resonance imaging studies.
NeuroImage, in press. doi: 10.1016/j.neuroimage.2007.09.018

3. Glover GH, Li TQ, Ress D.
Image-based method for retrospective correction of physiological motion effects in fMRI: RETROICOR.
Magn Reson Med. 2000 Jul;44(1):162-7.

4. Lund TE, Madsen KH, Sidaros K, Luo WL, Nichols TE.
Non-white noise in fMRI: does modelling have an impact?
Neuroimage. 2006 Jan 1;29(1):54-66.

5. Wise RG, Ide K, Poulin MJ, Tracey I.
Resting fluctuations in arterial carbon dioxide induce significant low frequency variations in BOLD signal.
Neuroimage. 2004 Apr;21(4):1652-64.

6. Brooks JCW, Beckmann CF, Miller KL, Wise RG, Porro CA, Tracey I, Jenkinson M.
Physiological noise modelling for spinal functional magnetic resonance imaging studies.
NeuroImage, in press. doi: 10.1016/j.neuroimage.2007.09.018

7. Penny, W., Kiebel, S., & Friston, K. (2003). Variational Bayesian inference for fMRI time series. NeuroImage, 19(3), 727–741. doi:10.1016/S1053-8119(03)00071-5

Mindfulness and neuroplasticity – summary of my recent paper.

First, let me apologize for an overlong hiatus from blogging. I submitted my PhD thesis October 1st, and it turns out that writing two papers and a thesis in the space of about three months can seriously burn out the old muse. I’ve coaxed her back through gentle offerings of chocolate, caffeine, and a bit of videogame binging. As long as I promise not to bring her within a mile of a dissertation, I believe we’re good for at least a few posts per month.

With that taken care of, I am very happy to report the successful publication of my first fMRI paper, published last month in the Journal of Neuroscience. The paper was truly a labor of love taking nearly 3 years to complete and countless hours of head-scratching work. In the end I am quite happy with the finished product, and I do believe my colleagues and I managed to produce a useful result for the field of mindfulness training and neuroplasticity.

note: this post ended up being quite long. if you are already familiar with mindfulness research, you may want to skip ahead!

Why mindfulness?

First, depending on what brought you here, you may already be wondering why mindfulness is an interesting subject, particularly for a cognitive neuroscientist. In light of the large gaps in our understanding of the neurobiological foundations of neuroimaging signals, is it really the right time to apply these complex tools to meditation? Can we really learn anything about something as potentially ambiguous as “mindfulness”? These are certainly fair questions, and although we have a long way to go, I do believe that the study of meditation has a lot to contribute to our understanding of cognition and plasticity.

Generally speaking, when you want to investigate some cognitive phenomenon, a firm understanding of your target is essential to successful neuroimaging. Areas with years of behavioral research and concrete theoretical models make for excellent imaging subjects, as in these cases a researcher can hope to fall back on a sort of ‘ground truth’ to guide them through the neural data, which are notoriously ambiguous and difficult to interpret. Of course well-travelled roads also have their disadvantages, sometimes providing a misleading sense of security, or at least being a bit dry. While mindfulness research still has a ways to go, our understanding of these practices is rapidly evolving.

At this point it helps to stop and ask: what is meditation (and by extension, mindfulness)? The first thing to clarify is that there is no single thing called “meditation” – rather, the word describes a family resemblance among highly varied practices, covering an array of both spiritual and secular techniques. Meditation or “contemplative” practices have existed for more than a thousand years and are found in nearly every spiritual tradition. More recently, here in the West our unending fascination with the esoteric has led to a popular rise in Yoga, Tai Chi, and other physically oriented contemplative practices, all of which incorporate an element of meditation.

At the simplest level of description, [mindfulness] meditation is just a process of becoming aware, whether through actual sitting meditation, exercise, or daily rituals. Meditation (as a practice) was first popularized in the West during the rise of transcendental meditation (TM). As you can see in the figure below, interest in TM led to an early boom in research articles. This boom was not to last, as it was gradually realized that much of this initially promising research was actually the product of zealous insiders, conducted with poor controls and in some cases outright data fabrication. As TM became known as a cult, meditation research underwent a dark age in which publishing on the topic could seriously damage a research career. We can also see that around the 1990s this trend started to reverse, as a new generation of researchers began investigating “mindfulness” meditation.

Figure: PubMed-indexed meditation publications over time.
Sidenote: research everywhere is expanding. Shouldn’t we start controlling these highly popular “pubs over time” figures for total publishing volume? =)

It’s easy to see from the above why, when Jon Kabat-Zinn re-introduced meditation to the West, he relied heavily on the medical community to develop a totally secularized, intervention-oriented version of meditation strategically called “mindfulness-based stress reduction” (MBSR). The arrival of MBSR was closely followed by the development of mindfulness-based cognitive therapy (MBCT), a revision of cognitive-behavioral therapy utilizing mindful practices and instruction for a variety of clinical applications. Mindfulness practice is typically described as involving at least two components: focused attention (FA) and open monitoring (OM). FA can be described as simply noticing when attention wanders from a target (the breath, the body, or a flower, for example) and gently redirecting it back to that target. OM is typically (but not always) trained at a later stage, building on the attentional skills developed in FA practice to gradually develop a sense of “non-judgmental open awareness”. While a great deal of work remains to be done, initial cognitive-behavioral and clinical research on mindfulness training (MT) has shown that these practices can improve the allocation of attentional resources, reduce physiological stress, and improve emotional well-being. In the clinic, MT appears to improve symptoms in a variety of clinical syndromes, including anxiety and depression, at least as well as standard CBT or pharmacological treatments.

Has the quality of research on meditation improved since the dark days of TM? When answering this question it is important to note two things about the state of current mindfulness research. First, while it is true that many who research MT are also practitioners, the primary scholars are researchers who started in classical areas (emotion, clinical psychiatry, cognitive neuroscience) and gradually became involved in MT research. Further, most funding today for MT research comes not from shady religious institutions, but from well-established funding bodies such as the National Institute of Health and European Research Council. It is of course important to be aware of the impact prior beliefs can have on conducting impartial research, but with respect to today’s meditation and mindfulness researchers, I believe that most if not all of the work being done is honest, quality research.

However, it is true that much of the early MT research is flawed on several levels. Indeed, several meta-analyses have concluded that, generally speaking, studies of MT have often utilized poor design – in one major review only 8 of 22 studies met criteria for meta-analysis. The reason for this is quite simple: in the absence of pilot data, investigators had to begin somewhere. It typically doesn’t bode well to jump into unexplored territory with an expensive, large-sample, fully randomized design. There just isn’t enough to go on – how would you know which kind of process to even measure? Accordingly, the large majority of mindfulness research to date has utilized small-scale, often sub-optimal experimental designs, sacrificing experimental control in order to build a basic idea of the cognitive landscape. While this exploratory research provides a needed foundation for generating likely hypotheses, it is also difficult to make any strong conclusions so long as methodological issues remain.

Indeed, most of what we know about mindfulness and neuroplasticity comes from studies of either advanced practitioners (compared to controls) or “wait-list” controlled studies in which controls receive no intervention. On the basis of the findings from these studies, we had some idea how to target our investigation, but there remained a nagging feeling of uncertainty. Just how much of the literature would actually replicate? Does mindfulness alter attention through mere expectation and motivation biases (i.e. placebo-like confounds), or can MT actually drive functionally relevant attentional and emotional neuroplasticity, even when controlling for these confounds?

The name of the game is active-control

Research to date links mindfulness practices to alterations in health and physiology, cognitive control, emotional regulation, responsiveness to pain, and a large array of positive clinical outcomes. However, the explicit nature of mindfulness training makes for some particularly difficult methodological issues. Group cross-sectional studies, where advanced practitioners are compared to age-matched controls, cannot provide causal evidence. Indeed, it is always possible that having a big fancy brain makes you more likely to spend many years meditating, and not that meditating gives you a big fancy brain. So training studies are essential to verifying the claim that mindfulness actually leads to interesting kinds of plasticity. However, unlike with a new drug or computerized intervention, you cannot simply provide a sugar pill to the control group. Double-blind design is impossible: by definition, subjects will know they are receiving mindfulness training. To actually assess the impact of MT on neural activity and behavior, we need to compare groups doing relatively equivalent things in similar experimental contexts. We need an active control.

There is already a well-established link between measurement outcome and experimental demands. What is perhaps less appreciated is that cognitive measures, particularly reaction time, are easily biased by phenomena like the Hawthorne effect, where the amount of attention participants receive directly contributes to the experimental outcome. Wait-lists simply cannot overcome these difficulties. We know, for example, that simply paying controls a moderate performance-based financial reward can erase attentional reaction-time differences. If you are repeatedly told you’re training attention, then come experiment time you are likely to expect this to be true and try harder than someone who has received no such instruction. The same is true of emotional tasks: subjects told frequently that they are training compassion are likely to spend more time fixating on emotional stimuli, leading to inflated self-reports and responses.

I’m sure you can quickly see how extremely important it is to control for these factors if we are to isolate and understand the mechanisms important for mindfulness training. One key solution is active control: providing both groups (MT and control) with a “treatment” that is at least nominally as efficacious as the thing you are interested in. Active control allows you to exclude numerous factors from your outcome, potentially including the role of social support, expectation, and experimental demands. This is exactly what we set out to do in our study, where we recruited 60 meditation-naïve subjects, scanned them on an fMRI task, randomized them to either six weeks of MT or active control, and then measured everything again. Further, to exclude confounds relating to social interaction, we came up with a particularly unique control activity – reading Emma together.

Jane Austen as Active Control – theory of mind vs interoception

To overcome these confounds, we constructed a specialized control intervention. As it was crucial that both groups believed in their training, we needed an instructor who could match the high level of enthusiasm and experience found in our meditation instructors. We were lucky to have the help of local scholar Mette Stineberg, who suggested a customized “shared reading” group to fit our purposes. Reading groups are a fun, attention-demanding exercise, with purported benefits for stress and well-being. While these claims have not been explicitly tested, what mattered most was that Mette clearly believed in their efficacy – making for a perfect control instructor. Mette holds a PhD in literature, and we knew that her 10 years of experience participating in and leading these groups would help us to exclude instructor variables from our results.

With her help, we constructed a special condition where participants completed group readings of Jane Austen’s Emma. A sensible question to ask at this point is: “why Emma?” An essential element of active control is variable isolation, or balancing your groups in such a way that, with the exception of your hypothesized “active ingredient”, the two interventions are extremely similar. As MT is thought to depend on a particular kind of non-judgmental, interoceptive attention, Chris and Uta Frith suggested during an early meeting that Emma might be a perfect contrast. For those of you who haven’t read the novel, the plot is brimming over with judgment-heavy, theory-of-mind-type exposition. Mette further helped to ensure a contrast with MT by emphasizing discussion sessions focused on character motives. In this way we were able to ensure that both groups met for the same amount of time each week, with equivalently talented and passionate instructors, and felt that they were working towards something worthwhile. Finally, we made sure to let every participant know at recruitment that they would receive one of two treatments intended to improve attention and well-being, and that any benefits would depend upon their commitment to the practice. To help them practice at home, we created 20-minute-long CDs for both groups, one with a guided meditation and the other with a chapter from Emma.

Unlike previous active-controlled studies, which typically rely on relaxation training, reading groups depend upon a high level of social interaction. Reading together allowed us to exclude not only treatment context and expectation from our results, but also the more difficult effects of social support (the “making new friends” variable). To measure this, we built a small website for participants to make daily reports of their motivation and the minutes they practiced that day. As you can see in the figure below, when we averaged these reports we found that not only did the reading group practice significantly more than the MT group, but both groups expressed equivalent levels of motivation to practice. Anecdotally, reading-group members expressed a high level of satisfaction with their class, with a sub-group of about 8 even continuing their meetings after our study concluded. The meditation group, by comparison, did not appear to form any lasting social relationships and did not continue meeting after the study. We were very happy with these results, which suggest that it is very unlikely our results could be explained by unbalanced motivation or expectation.

Impact of MT on attention and emotion

After establishing that the active control was successful, the first thing to look at was some of our outside-the-scanner behavioral results. As we were interested in the effect of meditation on both attention and meta-cognition, we used an “error-awareness task” (EAT) to examine improvement in these areas. The EAT (shown below) is a typical go/no-go task where subjects spend most of their time pressing a button. The difficult part comes whenever a “stop trial” occurs and subjects must quickly halt their response. When a subject fails to stop, they then have the opportunity to “fix” the error by pressing a second button on the trial following the error. If you’ve ever taken this kind of task, you know that it can be frustratingly difficult to stop your finger in time; the response becomes quite habitual. Using the EAT, we examined the impact of MT both on controlling responses (a variable called “stop accuracy”) and on meta-cognitive self-monitoring (percent “error-awareness”).
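To make the two EAT measures concrete, here is a minimal sketch of how they could be scored from a trial log. The field names and numbers are entirely illustrative, not taken from the actual task code:

```python
# Hypothetical scoring of EAT-style trial logs.
def score_eat(trials):
    """Each trial is a dict: kind ('go' or 'stop'), responded (bool),
    and, for failed stops, aware (bool) -- whether the subject pressed
    the awareness button on the following trial."""
    stops = [t for t in trials if t["kind"] == "stop"]
    errors = [t for t in stops if t["responded"]]  # failed inhibitions
    stop_accuracy = 1 - len(errors) / len(stops)   # proportion of successful stops
    # Error-awareness: proportion of commission errors the subject signalled
    error_awareness = (sum(t["aware"] for t in errors) / len(errors)
                       if errors else float("nan"))
    return stop_accuracy, error_awareness

trials = (
    [{"kind": "go", "responded": True}] * 20
    + [{"kind": "stop", "responded": False}] * 6
    + [{"kind": "stop", "responded": True, "aware": True}] * 3
    + [{"kind": "stop", "responded": True, "aware": False}] * 1
)
sa, ea = score_eat(trials)
print(sa, ea)  # 0.6 stop accuracy, 0.75 error awareness
```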

The error-awareness task

We started by looking for significant group-by-time interactions on stop accuracy and error-awareness, which would indicate that the change in a measure over time was statistically greater in the treatment (MT) group than in the control group. In a repeated-measures design, this type of interaction is your first indication that the treatment may have had a greater effect than the control. When we looked at the data, it was immediately clear that while both groups improved over time (a ‘main effect’ of time), there was no interaction to be found:

Group x time analysis of SA and EA.

While it is likely that much of the increase over time can be explained by test-retest effects (i.e. simply taking the test twice), we wanted to see if any of this variance might be explained by something specific to meditation. To do this we entered stop accuracy and error-awareness into a linear model comparing, between groups, the slope relating each participant’s practice time to the EAT measures. Here we saw that practice predicted stop-accuracy improvement only in the meditation group, and that this relationship was statistically greater than in the reading group:

Practice vs stop accuracy (MT only shown). We did, of course, test the interaction; see the paper for the full GLM goodness =)
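The slope comparison amounts to a practice-by-group interaction in a regression of improvement on minutes practiced. A minimal sketch with made-up numbers (statsmodels is my choice here, not necessarily the software the study used):

```python
# Fabricated data: improvement tracks practice in MT only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 30  # hypothetical subjects per group
df = pd.DataFrame({
    "group": np.repeat(["MT", "reading"], n),
    "practice": rng.uniform(200, 800, 2 * n),  # total minutes over six weeks
})
# True slope: 0.02 points per minute for MT, flat for the reading group
slope = np.where(df["group"] == "MT", 0.02, 0.0)
df["improvement"] = slope * df["practice"] + rng.normal(0, 2, 2 * n)

m = smf.ols("improvement ~ practice * C(group)", data=df).fit()
# 'practice' = slope in the MT (reference) group;
# the interaction term tests whether the reading-group slope differs
print(m.params["practice"])
print(m.pvalues["practice:C(group)[T.reading]"])
```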

These results led us to conclude that while we did not observe a treatment effect of MT on the error-awareness task, the presence of strong time effects, together with a practice correlation found only in the MT group, suggested that the improvements may relate to the “active ingredients” of MT while reflecting motivation-driven artifacts in the reading group. Sadly we cannot conclude this firmly; we’d have needed to include a third, passive control group for comparison. Thankfully this was pointed out to us by a kind reviewer, who noted that this argument is kind of like having one’s cake and eating it too, so we’ll restrict ourselves to arguing that the EAT finding serves as a nice validation of the active control (both groups improved on something) and a potential indicator of a stop-related treatment mechanism.

While the EAT served as a behavioral measure of basic cognitive processes, we also wanted to examine the neural correlates of attention and emotion, to see how they might respond to mindfulness training in our intervention. For this we partnered with Karina Blair at the National Institute of Mental Health to bring the Affective Stroop task (shown below) to Denmark.

Affective Stroop Trial Scheme

The Affective Stroop Task (AST) builds on a basic “number-counting Stroop” to investigate the neural correlates of attention, emotion, and their interaction. The instruction is simply: count the numbers in the first display, count the numbers in the second display, and decide which display contained more numbers. As you can see in the trial example above, conflict in the task (trial type “C”) is driven by incongruence between the Arabic numeral (e.g. “4”) and the numerosity of the display (a display of five “4”s). Meanwhile, each trial includes negative or neutral emotional stimuli selected from the International Affective Picture System. Using the AST, we were able to examine the neural correlates of executive attention by contrasting task (B + C > A) and emotion (negative > neutral) trials.

Since we were especially interested in changes over time, we expanded on these contrasts to examine increased or decreased neural response between the first and last scans of the study. To do this we relied on two levels of analysis (standard in imaging): at the “first” or “subject” level, we examined differences between the two time points for each condition (task and emotion) within each subject. We then compared these time-related effects (contrast images) between the groups using a two-sample t-test with total minutes of practice as a covariate. To assess the impact of meditation on performing the AST, we examined reaction times in a model with factors group, time, task, and emotion. In this way we were able to examine the impact of MT on neural activity and behavior while controlling for the kinds of artifacts discussed in the previous section.

Our analysis revealed three primary findings. First, the reaction-time analysis revealed a significant effect of MT on Stroop conflict, i.e. the difference between reaction times to incongruent versus congruent trials. Second, we did not observe any effect on emotion-related RTs: although both groups sped up significantly on negative trials versus neutral (a time effect), this speed-up was equivalent in both groups. Below you can see the Stroop-conflict-related RTs:

Stroop conflict result

This became particularly interesting when we examined the neural response to these conditions and again observed a pattern of overall BOLD signal increases in the dorsolateral prefrontal cortex during task performance (below):

DLPFC increase to task

Interestingly, we did not observe significant overall increases to emotional stimuli; just being in the MT group didn’t seem to be enough to change emotional processing. However, when we examined whole-brain correlations between amount of practice and increased BOLD response to negative emotion, we found a striking pattern of fronto-insular BOLD increases to negative images, similar to patterns seen in previous studies of compassion and mindfulness practice:

Greater association of prefrontal-insular response to negative emotion and practice

When we put all this together, a pattern began to emerge. Overall, MT had a relatively clear impact on attention and cognitive control. Practice-correlated increases in EAT stop accuracy, reduced Affective Stroop conflict, and increases in dorsolateral prefrontal cortex responses to task all point towards plasticity at the level of executive function. In contrast, our emotion-related findings suggest that alterations in affective processing occurred only in the MT participants with the most practice. Given how little we know about the training trajectories of cognitive versus affective skills, we felt that this was a very interesting result.

Conclusion: the more you do, the more you get?

For us, the first conclusion from all this was that when you control for motivation and a host of other confounds, brief MT appears primarily to train attention-related processes. Secondly, alterations in affective processing seemed to require more practice to emerge. This is interesting both for understanding the neuroscience of training and for the effective application of MT in clinical settings. While a great deal of future research is needed, it is possible that the affective system is generally more resilient to intervention than attention. It may be that altering affective processes depends upon, and extends, increasing control over executive function. Previous research suggests that attention is largely flexible and amenable to a variety of training regimens, of which MT is only one beneficial intervention. However, we are also becoming increasingly aware that training attention alone does not seem to translate directly into even closely related benefits.

As we begin to realize that many societal and health problems cannot be solved through medication or attention training alone, it becomes clear that techniques to increase emotional function and well-being are crucial for future development. I am reminded of a quote overheard at the Mind & Life Summer Research Institute and attributed to the Dalai Lama. Supposedly, when asked about the goal of developing meditation programs in the West, HHDL replied that what was truly needed in the West was not “cognitive training, as (those in the west) are already too clever. What is needed rather is emotion training, to cultivate a sense of responsibility and compassion”. When we consider falling rates of empathy in medical practitioners and their link to health outcomes, I think we do need to explore the role of emotional and embodied skills in supporting a wide array of functions in cognition and well-being. While emotional development is likely to depend upon executive function, given all the recent failures to show transfer from training these domains to even closely related ones, I suspect we need to begin including affective processes in our understanding of optimal learning. If these differences hold, then it may be important to reassess our interventions (mindful and otherwise), developing training programs customized in terms of the intensity, duration, and content appropriate for any given context.

Of course, rather than end on such an inspiring note, I should point out that like any study, ours is not without flaws (you’ll have to read the paper to find out how many 😉) and is really just an initial step. We made significant progress in replicating common neural and behavioral effects of MT while controlling for important confounds, but in retrospect the study could have been strengthened by including measures that would better distinguish the precise mechanisms, for example a measure of body awareness or empathy. Another thing that struck me was how much I wish we’d had a passive control group, which could have helped flesh out how much of our time effect was instrument reliability versus motivation. As far as I am concerned, the study was a success, and I am happy to have done my part to push mindfulness research towards methodological clarity and rigor. In the future I know others will continue this trend and investigate exactly what sorts of practice are needed to alter brain and behavior, and just how these benefits are accomplished.

In the near-future, I plan to give mindfulness research a rest. Not that I don’t find it fascinating or worthwhile, but rather because during the course of my PhD I’ve become a bit obsessed with interoception and meta-cognition. At present, it looks like I’ll be spending my first post-doc applying predictive coding and dynamic causal modeling to these processes. With a little luck, I might be able to build a theoretical model that could one day provide novel targets for future intervention!

Link to paper:

Cognitive-Affective Neural Plasticity following Active-Controlled Mindfulness Intervention

Thanks to all the collaborators and colleagues who made this study possible.

Special thanks to Kate Mills (@le_feufollet) for proofing this post 🙂