Effective connectivity or just plumbing? Granger Causality estimates highly reliable maps of venous drainage.

Update: for an excellent response to this post, see the comment by Anil Seth at the bottom of this article. Also don’t miss the extended debate regarding the general validity of causal methods for fMRI at Russ Poldrack’s blog that followed this post.

While the BOLD signal can be a useful measurement of brain function when used properly, the fact that it indexes blood flow rather than neural activity raises more than a few significant concerns. When we make inferences on BOLD, we want to be sure the observed effects are causally downstream of actual neural activity, rather than the product of physiological noise such as fluctuations in breathing or heart rate. This is a problem for all fMRI analyses, but it is particularly tricky for resting state fMRI, where we are interested in signal fluctuations that fall in the same frequency range as respiration and pulse. Now a new study has extended these troubles to Granger causality modelling (GCM), a lag-based method for estimating causal interactions between time series that is popular in the resting state literature. Just how bad is the damage?

In an article published this week in PLOS ONE, Webb and colleagues analysed over a thousand scans from the Human Connectome database, examining the reliability of GCM estimates and the proximity of the major ‘hubs’ identified by GCM with known major arteries and veins. The authors first found that GCM estimates were highly robust across participants:

Plot showing robustness of GCM estimates across 620 participants. The majority of estimated causes did not show significant differences within or between participants (black datapoints).

They further report that “the largest [most robust] lags are for BOLD Granger causality differences for regions close to large veins and dural venous sinuses”. In other words, although the major ‘upstream’ and ‘downstream’ nodes estimated by GCM are highly robust across participants, regions primarily influencing other regions (i.e. causal outflow) map onto major arteries, whereas regions primarily receiving ‘inputs’ (i.e. causal inflow) map onto veins. This pattern of ‘causation’ is very difficult to explain as anything other than a non-neural artifact: the regions mostly ‘causing’ activity in others are exactly where fresh blood enters the brain, and the regions primarily being influenced by others are areas of major blood drainage. Check out the arteriogram and venogram provided by the authors:

Depiction of major arteries (top image) and veins (bottom). Note overlap with areas of greatest G-cause (below).

Compare the above to their thresholded z-statistic map for significant Granger causality; white areas show significant G-causation overlapping with an arteriogram mask, green areas overlap with a venogram mask:

From paper:
“Figure 5. Mean Z-statistic for significant Granger causality differences to seed ROIs. Z-statistics were averaged for a given target ROI with the 264 seed ROIs to which it exhibited significantly asymmetric Granger causality relationship. Masks are overlaid for MRI arteriograms (white) and MRI venograms (green) for voxels with greater than 2 standard deviations signal intensity of in-brain voxels in averaged images from 33 (arteriogram) and 34 (venogram) subjects. Major arterial inflow and venous outflow distributions are labeled.”

It’s fairly obvious from the above that a significant proportion of the areas typically G-causing other areas overlap with arteries, whereas areas typically being G-caused by others overlap with veins. This is a serious problem for GCM of resting state fMRI, and worse, these effects were also observed across a comprehensive range of task-based fMRI data. The authors come to the grim conclusion that “Such arterial inflow and venous drainage has a highly reproducible pattern across individuals where major arterial and venous distributions are largely invariant across subjects, giving the illusion of reliable timing differences between brain regions that may be completely unrelated to actual differences in effective connectivity”. Importantly, this isn’t the first time GCM has been called into question. A related concern is the impact of spatial variation in the lag between neural activation and the BOLD response (the ‘hemodynamic response function’, HRF) across the brain. Previous work using simultaneous intracranial and BOLD recordings has shown that, due to these lags, GCM can estimate a causal pattern of A then B when the actual neural activity was B then A.

This is because GCM acts in a relatively simple way; given two time series (A & B), if the future state of B is better predicted by the past fluctuations of both A and B than by the past of B alone, then A is said to G-cause B. However, as we’ve already established, BOLD is a messy and complex signal, in which neural activity is filtered through slow blood fluctuations that must be carefully mapped back onto neural activity using deconvolution methods. Thus, what looks like A then B in BOLD can actually be due to differences in HRF lag between regions – GCM is blind to this, as it does not consider the underlying process producing the time series. While this particular problem can be addressed by combining GCM (which is naïve to the underlying cause of the analysed time series) with an approach that deconvolves each voxel-wise time series with a canonical HRF, the authors point out that deconvolution would not resolve the concern raised here, namely that Granger causality largely picks up macroscopic temporal patterns of blood in- and outflow:

“But even if an HRF were perfectly estimated at each voxel in the brain, the mechanism implied in our data is that similarly oxygenated blood arrives at variable time points in the brain independently of any neural activation and will affect lag-based directed functional connectivity measurements. Moreover, blood from one region may then propagate to other regions along the venous drainage pathways also independent of neural to vascular transduction. It is possible that the consistent asymmetries in Granger causality measured in our data may be related to differences in HRF latency in different brain regions, but we consider this less likely given the simpler explanation of blood moving from arteries to veins given the spatial distribution of our results.”
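To make the lag-based logic concrete, here is a minimal numpy sketch of the core Granger comparison (restricted vs. full autoregressive model) on synthetic data. This is purely illustrative – real pipelines add model-order selection and proper significance testing – but it shows exactly what the method rewards: any systematic timing offset between two signals, whether neural or vascular.

```python
import numpy as np

def granger_f(a, b, p=1):
    """F-statistic for 'A Granger-causes B' at model order p.

    Compares a restricted AR model (B predicted from its own past)
    against a full model (B's past plus A's past). A larger F means
    A's history improves the prediction of B."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    T = len(b)
    y = b[p:]
    lags_b = np.column_stack([b[p - k:T - k] for k in range(1, p + 1)])
    lags_a = np.column_stack([a[p - k:T - k] for k in range(1, p + 1)])
    ones = np.ones((T - p, 1))
    rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    rss_r = rss(np.hstack([ones, lags_b]))          # restricted: B's past only
    rss_f = rss(np.hstack([ones, lags_b, lags_a]))  # full: B's past + A's past
    dof = len(y) - (1 + 2 * p)                      # residual degrees of freedom
    return ((rss_r - rss_f) / p) / (rss_f / dof)

# Synthetic example: A drives B at lag 1, so F(A -> B) dwarfs F(B -> A)
rng = np.random.default_rng(0)
n = 500
a = rng.standard_normal(n)
b = np.zeros(n)
for t in range(1, n):
    b[t] = 0.8 * a[t - 1] + 0.2 * rng.standard_normal()
```

Note that the same machinery, run on BOLD, will happily “detect” causality from blood arriving in one region consistently earlier than another – which is precisely the artifact Webb and colleagues describe.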

As for correcting these effects, the authors suggest that a nuisance variable approach estimating vascular effects related to pulse, respiration, and breath-holding may be effective. However, they caution that the effects observed here (large scale blood inflow and drainage) take place over a timescale an order of magnitude slower than actual neural differences, and that this approach would need extremely precise estimates of the associated nuisance waveforms to prevent confounded connectivity estimates. For now, I’d advise readers to be critical of what can actually be inferred from GCM until further research can be done, preferably using multi-modal methods capable of directly inferring the impact of vascular confounds on GCM estimates. Indeed, although I suppose I am a bit biased, I have to ask whether it wouldn’t be simpler to just use Dynamic Causal Modelling, a technique explicitly designed for estimating causal effects between BOLD timeseries, rather than a method originally designed to estimate influences between financial stocks.

References for further reading:

Friston, K. (2009). Causal modelling and brain connectivity in functional magnetic resonance imaging. PLoS biology, 7(2), e33. doi:10.1371/journal.pbio.1000033

Friston, K. (2011). Dynamic causal modeling and Granger causality. Comments on: The identification of interacting networks in the brain using fMRI: model selection, causality and deconvolution. NeuroImage, 58(2), 303–5; author reply 310–1. doi:10.1016/j.neuroimage.2009.09.031

Friston, K., Moran, R., & Seth, A. K. (2013). Analysing connectivity with Granger causality and dynamic causal modelling. Current opinion in neurobiology, 23(2), 172–8. doi:10.1016/j.conb.2012.11.010

Webb, J. T., Ferguson, M. A., Nielsen, J. A., & Anderson, J. S. (2013). BOLD Granger Causality Reflects Vascular Anatomy. (P. A. Valdes-Sosa, Ed.) PLoS ONE, 8(12), e84279. doi:10.1371/journal.pone.0084279

Chang, C., Cunningham, J. P., & Glover, G. H. (2009). Influence of heart rate on the BOLD signal: the cardiac response function. NeuroImage, 44(3), 857–69. doi:10.1016/j.neuroimage.2008.09.029

Chang, C., & Glover, G. H. (2009). Relationship between respiration, end-tidal CO2, and BOLD signals in resting-state fMRI. NeuroImage, 47(4), 1381–93. doi:10.1016/j.neuroimage.2009.04.048

Lund, T. E., Madsen, K. H., Sidaros, K., Luo, W.-L., & Nichols, T. E. (2006). Non-white noise in fMRI: does modelling have an impact? Neuroimage, 29(1), 54–66.

David, O., Guillemain, I., Saillet, S., Reyt, S., Deransart, C., Segebarth, C., & Depaulis, A. (2008). Identifying neural drivers with functional MRI: an electrophysiological validation. PLoS biology, 6(12), 2683–97. doi:10.1371/journal.pbio.0060315

Update: This post continued into an extended debate on Russ Poldrack’s blog, where Anil Seth made the following (important) comment:

Hi, this is Anil Seth. What an excellent debate, and I hope I can add a few quick thoughts of my own, since this is an issue close to my heart (no pun intended re vascular confounds).

First, back to the Webb et al. paper. They indeed show that a vascular confound may affect GC-fMRI, but only in the resting state and given suboptimal TR and averaging over diverse datasets. Indeed I suspect that their autoregressive models may be poorly fit, so that the results reflect a sort of mental chronometry a la Menon rather than GC per se.
In any case the more successful applications of GC-fMRI are those that compare experimental conditions or correlate GC with some behavioural variable (see e.g. Wen et al., http://www.ncbi.nlm.nih.gov/pubmed/22279213). In these cases hemodynamic and vascular confounds may subtract out.
Interpreting findings like these means remembering that GC is a description of the data (i.e. DIRECTED FUNCTIONAL connectivity) and is not a direct claim about the underlying causal mechanism (e.g. like DCM, which is a measure of EFFECTIVE connectivity).  Therefore (model light) GC and (model heavy) DCM are to a large extent asking and answering different questions, and to set them in direct opposition is to misunderstand this basic point.  Karl, Ros Moran, and I make these points in a recent review (http://www.ncbi.nlm.nih.gov/pubmed/23265964).
Of course both methods are complex and ‘garbage in garbage out’ applies: naive application of either is likely to be misleading or worse.  Indeed the indirect nature of fMRI BOLD means that causal inference will be very hard.  But this doesn’t mean we shouldn’t try.  We need to move to network descriptions in order to get beyond the neo-phrenology of functional localization.  And so I am pleased to see recent developments in both DCM and GC for fMRI.  For the latter, with Barnett and Chorley I have shown that GC-FMRI is INVARIANT to hemodynamic convolution given fast sampling and low noise (http://www.ncbi.nlm.nih.gov/pubmed/23036449).  This counterintuitive finding defuses a major objection to GC-fMRI and has been established both in theory, and in a range of simulations of increasing biophysical detail.  With the development of low-TR multiband sequences, this means there is renewed hope for GC-fMRI in practice, especially when executed in an appropriate experimental design.  Barnett and I have also just released a major new GC software which avoids separate estimation of full and reduced AR models, avoiding a serious source of bias afflicting previous approaches (http://www.ncbi.nlm.nih.gov/pubmed/24200508).
Overall I am hopeful that we can move beyond premature rejection of promising methods on the grounds they fail when applied without appropriate data or sufficient care.  This applies to both GC and fMRI. These are hard problems but we will get there.

Mind-wandering and metacognition: variation between internal and external thought predicts improved error awareness

Yesterday I published my first paper on mind-wandering and metacognition, with Jonny Smallwood, Antoine Lutz, and collaborators. This was a fun project for me, as I spent much of my PhD exhaustively reading the literature on mind-wandering and default mode activity, resulting in a lot of intense debate at my research center. When we had Jonny over as an opponent at my PhD defense, the chance to collaborate was simply too good to pass up. Mind-wandering is super interesting precisely because we do it so often. One of my favourite anecdotes comes from around the time I was arguing heavily for the role of the default mode in spontaneous cognition to some very skeptical colleagues. The next day while waiting to cross the street, one such colleague rode up next to me on his bicycle and joked, “are you thinking about the default mode?” And indeed I was – meta-mind-wandering!

One thing that has really bothered me about much of the mind-wandering literature is how frequently it is presented as attention = good, mind-wandering = bad. Can you imagine how unpleasant it would be if we never mind-wandered? Just picture trying to solve a difficult task while being totally 100% focused. This kind of hyper-locked attention can easily become pathological, preventing us from altering course when our behaviour goes awry or when something internal needs to be adjusted. Mind-wandering serves many positive purposes, from stimulating our imaginations, to motivating us in boring situations with internal rewards (boring task… “ahhhh, remember that nice mojito you had on the beach last year?”). Yet we largely see papers exploring the costs – mood deficits, cognitive control failure, and so on. In the meditation literature this has even been taken up to form the misguided idea that meditation should reduce or eliminate mind-wandering (even though there is almost zero evidence to this effect).

Sometimes our theories end up reflecting our methodological apparatus, to the extent that they may not fully capture reality. I think this is part of what has happened with mind-wandering, which was originally defined in relation to difficult (and boring) attention tasks. Worse, mind-wandering is usually operationalized as a dichotomous state (“offtask” vs “ontask”), when a little introspection strongly suggests it is much more of a fuzzy, dynamic transition between meta-cognitive and sensory processes. By reducing mind-wandering to the mean number of times you were “offtask”, we’re taking the stream of consciousness and acting as if the ‘depth’ at one point in the river is the entire story – but what about flow rate, tidal patterns, fishies, and all the dynamic variability that defines the river? My idea was that one simple way to get at this is to look at the within-subject variability of mind-wandering, rather than just the overall mean “rate”. In this way we could get some idea of the extent to which a person’s mind-wandering was fluctuating over time, rather than just categorising these events dichotomously.

The EAT task used in my study, with thought probes.

To do this, we combined a classical meta-cognitive response inhibition paradigm, the “error awareness task” (pictured above), with standard interleaved “thought probes” asking participants to rate, on a scale of 1-7, the “subjective frequency” of task-unrelated thoughts in the task interval prior to the probe. We then examined the relationship between the ability to perform the task (“stop accuracy”) and each participant’s mean task-unrelated thought (TUT) rating. Here we expected to replicate the well-established relationship between TUTs and attention decrements (after all, it’s difficult to inhibit your behaviour if you are thinking about the hunky babe you saw at the beach last year!). We further examined whether the standard deviation of TUT within each participant (TUT variability) would predict error monitoring, reflecting a relationship between metacognition and increased fluctuation between internal and external cognition (after all, isn’t that kind of the point of metacognition?). For specificity and completeness, we conducted each multiple regression analysis with the other variable included as a control predictor. Here is the key finding from the paper:

Regression analysis of TUT, TUT variability, stop accuracy, and error awareness.

As you can see in the bottom right, we clearly replicated the relationship of increased overall TUT predicting poorer stop performance. Individuals who report an overall high intensity/frequency of mind-wandering unsurprisingly commit more errors. What was really interesting, however, was that the more variable a participant’s mind-wandering, the greater their error-monitoring capacity (top left). This suggests that individuals who show more fluctuation between internally and externally oriented attention may be better able to enjoy the benefits of mind-wandering while simultaneously limiting its costs. Of course, these are only individual differences (i.e. correlations) and should be treated as highly preliminary. It is possible, for example, that participants who use more of the TUT scale have higher meta-cognitive ability in general, rather than the two variables being causally linked in the way we suggest. We are careful to raise these and other limitations in the paper, but I do think this finding is a nice first step.
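For readers who want the gist of the analysis logic, here is a toy numpy sketch of the key regression. The numbers are entirely hypothetical (simulated so that variability, not mean TUT, carries the effect); it only illustrates why entering the mean as a control predictor matters when testing the variability effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30  # hypothetical participants (not the study's data)

# Simulated subject-level summaries of the thought probes
mean_tut = rng.uniform(1, 7, n)    # mean 1-7 probe rating per participant
tut_sd = rng.uniform(0.1, 2.0, n)  # within-subject SD of the ratings

# By construction, awareness depends on variability, not the mean
awareness = 0.5 * tut_sd + rng.normal(0, 0.2, n)

def z(x):
    """Standardize so regression weights are comparable betas."""
    return (x - x.mean()) / x.std()

# Multiple regression: awareness ~ intercept + mean TUT + TUT variability
X = np.column_stack([np.ones(n), z(mean_tut), z(tut_sd)])
beta, *_ = np.linalg.lstsq(X, z(awareness), rcond=None)
```

The study itself ran the complementary models as well (stop accuracy predicted by mean TUT controlling for variability); this sketch shows only one direction.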

To ‘probe’ a bit further we looked at the BOLD responses to correct stops, and the parametric correlation of task-related BOLD with the TUT ratings:

Activations during correct stop trials.
Deactivations to stop trials (blue) and parametric correlation with TUT reports (red)

As you can see, correct stop trials elicit a rather canonical activation pattern in the motor-inhibition and salience networks, with concurrent deactivations in visual cortex and the default mode network (second figure, blue blobs). I think of this pattern a bit like the brain receiving the ‘stop signal’ and going (a la Picard): “FULL STOP, MAIN VIEWER OFF, FIRE THE PHOTON TORPEDOES!”, launching into full response recovery mode. Interestingly, while we replicated the finding of medial-prefrontal co-variation with TUTs (second figure, red blob), this area was substantially more rostral than the stop-related deactivations, supporting previous findings of some degree of functional segregation between the inhibitory and mind-wandering related components of the DMN.

Finally, when examining the Aware > Unaware errors contrast, we replicated the typical salience network activations (mid-cingulate and anterior insula). Interestingly we also found strong bilateral activations in an area of the inferior parietal cortex also considered to be a part of the default mode. This finding further strengthens the link between mind-wandering and metacognition, indicating that the salience and default mode network may work in concert during conscious error awareness:

Activations to Aware > Unaware errors contrast.

In all, this was a very valuable and fun study for me. As a PhD student, being able to replicate the function of the classic “executive, salience, and default mode” ‘resting state’ networks with a basic task was a great experience, helping me place some confidence in these labels. I was also able to combine a classical behavioral metacognition task with introspective thought probes, and show that they do indeed contain valuable information about task performance and related brain processes. Importantly though, we showed that the ‘content’ of mind-wandering reports doesn’t tell the whole story of spontaneous cognition. In the future I would like to explore this idea further, perhaps by taking a time series approach to probe the dynamics of mind-wandering, using a simple continuous feedback device that participants could use throughout an experiment. In the affect literature such devices have been used to probe the dynamics of valence-arousal while participants view naturalistic movies, and I believe such an approach could reveal even greater granularity in how the experience of mind-wandering (and its fluctuation) interacts with cognition. Our findings suggest that the relationship between mind-wandering and task performance may be more nuanced than mere antagonism, an important finding I hope to explore in future research.

Citation: Allen M, Smallwood J, Christensen J, Gramm D, Rasmussen B, Jensen CG, Roepstorff A and Lutz A (2013) The balanced mind: the variability of task-unrelated thoughts predicts error monitoring. Front. Hum. Neurosci. 7:743. doi: 10.3389/fnhum.2013.00743

Is the resting BOLD signal physiological noise? What about resting EEG?

Over the past 5 years, resting-state fMRI (rsfMRI) has exploded in popularity. Literally dozens of papers are published each day examining slow (< 0.1 Hz) or “low frequency” fluctuations in the BOLD signal. When I first moved to Europe I was caught up in the somewhat North American frenzy of resting state networks. I couldn’t understand why my Danish colleagues, who specialize in modelling physiological noise in fMRI, simply did not take the literature seriously. The problem is essentially that the low frequencies examined in these studies are the same as those that dominate physiological rhythms. Respiration and cardiac pulsation can make up a massive amount of variability in the BOLD signal. Before resting state fMRI came along, nearly every fMRI study discarded any frequencies lower than one oscillation every 120 seconds (i.e. 1/120 Hz high-pass filtering). Simple things like breath holding and pulsatile motion in vasculature can cause huge effects in BOLD data, and it just so happens that these artifacts (which are non-neural in origin) tend to pool around some of our favorite “default” areas: medial prefrontal cortex, insula, and other large gyri near draining veins.
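As an aside, that 1/120 Hz high-pass cut is commonly implemented by projecting out a set of slow discrete cosine regressors (the SPM-style approach). Here is a rough numpy sketch with made-up numbers (TR = 2 s, a linear scanner drift riding on a faster 0.05 Hz signal) – a minimal illustration, not any particular package’s implementation:

```python
import numpy as np

def dct_highpass(y, tr, cutoff=120.0):
    """Remove fluctuations slower than 1/cutoff Hz by projecting out
    a truncated discrete cosine basis (SPM-style high-pass filter)."""
    n = len(y)
    k = int(2 * n * tr / cutoff)  # cosine terms with frequency below 1/cutoff
    t = np.arange(n)
    basis = np.column_stack(
        [np.ones(n)]
        + [np.cos(np.pi * (j + 1) * (t + 0.5) / n) for j in range(k)]
    )
    beta, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return y - basis @ beta

# Example: slow linear drift plus a 0.05 Hz signal, sampled at TR = 2 s
tr, n = 2.0, 240
t = np.arange(n) * tr                          # time in seconds
drift = t / t[-1]                              # slow scanner drift
signal = 0.3 * np.sin(2 * np.pi * 0.05 * t)    # faster signal of interest
clean = dct_highpass(drift + signal, tr)       # drift removed, signal kept
```

The same projection, of course, throws away any genuinely neural fluctuation slower than the cutoff – which is exactly why the resting-state field stopped filtering there.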

Naturally this leads us to ask if the “resting state networks” (RSNs) observed in such studies are actually neural in origin, or if they are simply the result of variations in breath pattern or the like. Obviously we can’t answer this question with fMRI alone. We can apply something like independent component analysis (ICA) and hope that it removes most of the noise- but we’ll never really be 100% sure we’ve gotten it all that way. We can measure the noise directly (e.g. “nuisance covariance regression”) and include it in our GLM- but much of the noise is likely to be highly correlated with the signal we want to observe. What we need are cross-modality validations that low-frequency oscillations do exist, that they drive observed BOLD fluctuations, and that these relationships hold even when controlling for non-neural signals. Some of this is already established- for example direct intracranial recordings do find slow oscillations in animal models. In MEG and EEG, it is well established that slow fluctuations exist and have a functional role.
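The “nuisance covariance regression” idea mentioned above is simple to sketch: regress measured physiological waveforms out of each voxel’s time series and keep the residuals. A toy numpy version with a synthetic respiratory confound (illustrative only; real pipelines use RETROICOR/RVT-style regressors built from actual recordings):

```python
import numpy as np

def regress_out(y, nuisance):
    """Residualize time series y against measured nuisance regressors
    (e.g. respiration and cardiac waveforms) via ordinary least squares."""
    X = np.column_stack([np.ones(len(y))] + list(nuisance))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta  # residuals are orthogonal to every regressor

# Synthetic voxel: neural signal plus a slow respiratory contamination
rng = np.random.default_rng(3)
n = 300
resp = np.sin(2 * np.pi * 0.03 * np.arange(n))  # slow respiratory waveform
neural = rng.standard_normal(n)                 # 'true' signal of interest
voxel = neural + 2.0 * resp                     # contaminated measurement
cleaned = regress_out(voxel, [resp])
```

The catch, as noted above, is that when the neural signal itself co-varies with respiration, this regression removes signal along with the noise – there is no free lunch.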

So far so good. But what about in fMRI? Can we measure meaningful signal while controlling for these factors? This is currently a topic of intense research interest. Marcus Raichle, the ‘father’ of the default mode network, highlights fascinating multi-modal work from a Finnish group showing that slow fluctuations in behavior and EEG signal coincide (Raichle and Snyder 2007; Monto, Palva et al. 2008). However, we should still be cautious- I recently spoke to a post-doc from the Helsinki group about the original paper, and he stressed that slow EEG is just as contaminated by physiological artifacts as fMRI. Except that the problem is even worse, because in EEG the artifacts may be several orders of magnitude larger than the signal of interest[i].

Understandably, I was interested to see a paper entitled “Correlated slow fluctuations in respiration, EEG, and BOLD fMRI” appear in NeuroImage today (Yuan, Zotev et al. 2013). The authors simultaneously collected EEG, respiration, pulse, and resting fMRI data in 9 subjects, and then performed cross-correlation and GLM analyses on the relationships between these variables, during both eyes-closed and eyes-open rest. They calculated Respiratory Volume per Time (RVT), a measure developed by Rasmus Birn, to assign a respiratory phase to each TR (Birn, Diamond et al. 2006). One key finding is that global variations in EEG power are strongly predicted by RVT during eyes-closed rest, with a maximum peak correlation coefficient of .40. Here are the two time series:

[Figure: time series of global alpha EEG power (GFP) and respiratory volume per time (RVT)]

You can clearly see that there is a strong relationship between global alpha (GFP) and respiration (RVT). The authors state that “GFP appears to lead RVT” though I am not so sure. Regardless, there is a clear relationship between eyes closed ‘alpha’ and respiration. Interestingly they find that correlations between RVT and GFP with eyes open were not significantly different from chance, and that pulse did not correlate with GFP. They then conduct GLM analyses with RVT and GFP as BOLD regressors. Here is what their example subject looked like during eyes-closed rest:

[Figure: example subject’s RVT-BOLD and GFP-BOLD correlation maps during eyes-closed rest]

Notice any familiar “RSNs” in the RVT map? I see anti-correlated executive deactivation and default mode activation! Very canonical.  Too bad they are breath related. This is why noise regression experts tend to dislike rsfMRI, particularly when you don’t measure the noise. We also shouldn’t be too surprised that the GFP-BOLD and RVT-BOLD maps look similar, considering that GFP and RVT are highly correlated. After looking at these correlations separately, Yuan et al perform RETROICOR physiological noise correction and then reexamine the contrasts. Here are the group maps:

[Figure: group-level RVT-BOLD and GFP-BOLD maps before and after physiological noise correction]

Things look a bit less default-mode-like in the group RVT map, but the RVT and GFP maps are still clearly quite similar. In panel D you can see that physiological noise correction has a large global impact on GFP-BOLD correlations, suggesting that quite a bit of this co-variance is driven by physiological noise. Put simply, respiration is explaining a large degree of alpha-BOLD correlation; any experiment not modelling this covariance is likely to produce strongly contaminated results. Yuan et al go on to examine eyes-open rest and show that, similar to their RVT-GFP cross-correlation analysis, not nearly as much seems to be happening in eyes open compared to closed:

[Figure: RVT-BOLD and GFP-BOLD group maps for eyes-open rest]

The authors conclude that “In particular, this correlation between alpha EEG and respiration is much stronger in eyes-closed resting than in eyes-open resting” and that “[the] results also suggest that eyes-open resting may be a more favorable condition to conduct brain resting state fMRI and for functional connectivity analysis because of the suppressed correlation between low-frequency respiratory fluctuation and global alpha EEG power, therefore the low-frequency physiological noise predominantly of non-neuronal origin can be more safely removed.” Fair enough- one conclusion is certainly that eyes closed rest seems much more correlated with respiration than eyes open. This is a decent and useful result of the study. But then they go on to make this really strange statement, which appears in the abstract, introduction, and discussion:

“In addition, similar spatial patterns were observed between the correlation maps of BOLD with global alpha EEG power and respiration. Removal of respiration related physiological noise in the BOLD signal reduces the correlation between alpha EEG power and spontaneous BOLD signals measured at eyes-closed resting. These results suggest a mutual link of neuronal origin between the alpha EEG power, respiration, and BOLD signals” (emphasis added)

That’s one way to put it! The logic here is that since alpha = neural activity, and respiration correlates with alpha, then alpha must be the neural correlate of respiration. I’m sorry guys, you did a decent experiment, but I’m afraid you’ve gotten this one wrong. There is absolutely nothing that implies alpha power cannot also be contaminated by respiration-related physiological noise. In fact it is exactly the opposite- in the low frequencies observed by Yuan et al the EEG data is particularly likely to be contaminated by physiological artifacts! And that is precisely what the paper shows – in the author’s own words: “impressively strong correlations between global alpha and respiration”. This is further corroborated by the strong similarity between the RVT-BOLD and alpha-BOLD maps, and the fact that removing respiratory and pulse variance drastically alters the alpha-BOLD correlations!

So what should we take away from this study? It is of course inconclusive- there are several aspects of the methodology that are puzzling to me, and sadly the study is rather under-powered at n = 9. I found it quite curious that in each of the BOLD-alpha maps there seemed to be a significant artifact in the lateral and posterior ventricles, even after physiological noise correction (check out figure 2b, an almost perfect ventricle map). If their global alpha signal is specific to a neural origin, why does this artifact remain even after physiological noise correction? I can’t quite put my finger on it, but it seems likely to me that some source of noise remained even after correction- perhaps a reader with more experience in EEG-fMRI methods can comment. For one thing, their EEG motion correction seems a bit suspect, as they simply drop outlier timepoints. One way or another, I believe we should take one clear message away from this study: low frequency signals are not easily untangled from physiological noise, even in electrophysiology. This isn’t a damnation of all resting state research- rather, it is a clear sign that we need to be measuring these signals to retain a degree of control over our data, particularly in resting designs, where we have the least experimental control.

References:

Birn, R. M., J. B. Diamond, et al. (2006). “Separating respiratory-variation-related fluctuations from neuronal-activity-related fluctuations in fMRI.” Neuroimage 31(4): 1536-1548.

Monto, S., S. Palva, et al. (2008). “Very slow EEG fluctuations predict the dynamics of stimulus detection and oscillation amplitudes in humans.” The Journal of Neuroscience 28(33): 8268-8272.

Raichle, M. E. and A. Z. Snyder (2007). “A default mode of brain function: a brief history of an evolving idea.” Neuroimage 37(4): 1083-1090.

Yuan, H., V. Zotev, et al. (2013). “Correlated Slow Fluctuations in Respiration, EEG, and BOLD fMRI.” NeuroImage.

 


[i] Note that this is not meant to be in any way a comprehensive review. A quick literature search suggests that there are quite a few recent papers on resting BOLD-EEG. I recall a well done paper by a group at the Max Planck Institute that did include noise regressors, and found unique slow BOLD-EEG relations. I cannot seem to find it at the moment however!

 

Insula and Anterior Cingulate: the ‘everything’ network or systemic neurovascular confound?

It’s no secret in cognitive neuroscience that some brain regions garner more attention than others. Particularly in fMRI research, we’re all too familiar with certain regions that seem to pop up in study after study, regardless of experimental paradigm. When it comes to areas like the anterior cingulate cortex (ACC) and anterior insula (AIC), the trend is obvious. Generally when I see the same brain region involved in a wide variety of tasks, I think there must be some very general function which encompasses these paradigms. Off the top of my head, the ACC and AIC are major players in cognitive control, pain, emotion, consciousness, salience, working memory, decision making, and interoception, to name a few. Maybe on a bad day I’ll look at a list like that and think, well, localization is just all wrong, and really what we have is a big fat prefrontal cortex doing everything in conjunction. A paper published yesterday in Cerebral Cortex took my breath away and led to a third, more sinister option: a serious methodological confound in a large majority of published fMRI papers.

Neurovascular coupling and the BOLD signal: a match not made in heaven

An important line of research in neuroimaging focuses on noise in fMRI signals. The essential problem of fMRI is that, while it provides decent spatial resolution, the data is acquired slowly and indirectly via the blood-oxygenation level dependent (BOLD) signal. The BOLD signal is messy, slow, and extremely complex in its origins. Although we typically assume increasing BOLD signal equals greater neural activity, the details of just what kind of activity (e.g. excitatory vs inhibitory, post-synaptic vs local field) are murky at best. Advancements in multi-modal and optogenetic imaging hold a great deal of promise regarding the signal’s true nature, but sadly we are currently at a “best guess” level of understanding. This weakness means that without careful experimental design, it can be difficult to rule out non-neural contributors to our fMRI signal. Setting aside the worry about what neural activity IS measured by BOLD signal, there is still the very real threat of non-neural sources like respiration and cardiovascular function confounding the final result. This is a whole field of research in itself, and is far too complex to summarize here in its entirety. The basic issue is quite simple though.

End-tidal CO2, respiration, and the BOLD signal

In a nutshell, the BOLD signal is thought to measure downstream changes in cerebral blood flow (CBF) in response to neural activity. This relationship between neural firing and blood flow is called neurovascular coupling, and it is extremely complex, involving astrocytes and multiple chemical pathways. It is also quite slow: one typically observes a 3-5 second delay between stimulation and BOLD response. This creates our first noise-related issue: the time between each full scan of the brain, or repetition time (TR), must be chosen to detect signals at this frequency. This means we sample from our participant’s brain slowly, typically every 3-5 seconds, and construct our paradigms in ways that respect the natural time lag of the BOLD signal. Stimulate too fast, and the vasculature doesn’t have time to respond. This slow pace also helps sidestep our first simple confound: our pulse and respiration rates tend to oscillate at slightly slower frequencies (roughly every 10-15 seconds). This is a good thing, and it means that so long as your design is well controlled (i.e. your events are properly staggered and your baseline is well defined), you shouldn’t have to worry too much about confounds. But that’s our first problematic assumption; consider, for example, paradigms that use long blocks of obscure instructions like “decide how much you identify with these stimuli”. If cognitive load differs between conditions, or your groups (for example, a PTSD group and a control group) react differently to the stimuli, respiration and pulse rates can easily begin to overlap your sampling frequency, confounding the BOLD signal.
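To make the sampling issue concrete, here is a toy numpy sketch (the breathing frequency, TR, and volume count are hypothetical values chosen for illustration, not taken from any of the papers discussed) showing how a respiratory oscillation, sampled slowly, masquerades as a slow fluctuation inside the resting-state band:

```python
import numpy as np

# Toy illustration: a ~0.3 Hz respiratory oscillation sampled every
# TR = 3 s. The sampling rate is 1/3 Hz, so the Nyquist limit is
# ~0.167 Hz, and the 0.3 Hz signal aliases down into the 0.01-0.1 Hz
# band where resting-state fluctuations live.
f_resp = 0.3            # breathing frequency in Hz (~18 breaths per minute)
tr = 3.0                # repetition time in seconds
n_vols = 200            # number of volumes acquired

t = np.arange(n_vols) * tr
sampled = np.sin(2 * np.pi * f_resp * t)

# Dominant frequency of the sampled series
freqs = np.fft.rfftfreq(n_vols, d=tr)
power = np.abs(np.fft.rfft(sampled)) ** 2
f_alias = freqs[np.argmax(power)]

print(f"true: {f_resp} Hz, apparent: {f_alias:.3f} Hz")  # apparent ~0.033 Hz
```

The “respiration” here comes out looking like a ~0.033 Hz fluctuation, squarely in the range of interest for resting-state analyses.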

But, you say, my experiment is well controlled, and there’s no way my groups are breathing THAT differently! Fair enough, but this leads us to our next problem: end-tidal CO2. Without getting into the complex physiology, end-tidal CO2 is a by-product of respiration. When you hold your breath, blood CO2 levels rise dramatically. CO2 is a potent vasodilator, meaning it opens blood vessels and increases local blood flow. You’ve probably guessed where I’m going with this: hold your breath in the fMRI and you get massive alterations in the BOLD signal. Your participants don’t even need to match the sampling frequency of the paradigm to confound the BOLD; they simply need to breathe at slightly different rates in each group or condition, and suddenly your results are full of CO2-driven false positives! This is a serious problem for any kind of unconstrained experimental design, especially those involving poorly conceptualized social tasks or long periods of free activity. Now imagine that certain regions of the brain might respond differently to levels of CO2.

This image is from Chang & Glover’s paper, “Relationship between respiration, end-tidal CO2, and BOLD signals in resting-state fMRI”. Here they measure both CO2 and respiration frequency during a standard resting-state scan. The image displays the results of a group-level regression of these signals with BOLD. I’ve added blue circles around the areas that respond most strongly. Without consulting an atlas, we can clearly see that bilateral anterior insula (extending upwards into parietal cortex), anterior cingulate, and medial prefrontal regions are hugely susceptible to respiration and CO2. This is pretty damning for resting-state fMRI, and makes sense given that resting-state fluctuations occur at roughly the same rate as respiration. But what about well-controlled event-related designs? Might variability in neurovascular coupling cause a similar pattern of response? Here is where Di et al’s paper lends a somewhat terrifying result:


Di et al recently investigated the role of vascular confounds in fMRI by administering a common digit-symbol substitution task (DSST), a resting-state scan, and a breath-holding paradigm. Signals related to resting state and breath-holding were then extracted and entered into a multiple regression with the DSST-related activations. This allowed Di et al to estimate which brain regions were most influenced by the amplitude of low-frequency fluctuations (ALFF, a common resting-state measure) and by purely vascular sources (breath-holding). In the figure above, regions marked with the blue arrow were the most suppressed, meaning the signal explained by the event-related model was significantly correlated with the covariates; regions in red showed signal that was significantly improved by removal of the covariates. The authors conclude that “(results) indicated that the adjustment tended to suppress activation in regions that were near vessels such as midline cingulate gyrus, bilateral anterior insula, and posterior cerebellum.” It seems that indeed, our old friends the anterior insula and cingulate cortex are extremely susceptible to neurovascular confound.
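A minimal sketch of this kind of covariate regression, using entirely synthetic signals (the coefficients, noise levels, and variable names are my own inventions, not Di et al’s data): when a vascular confound tracks the task, a task-only GLM yields a spurious activation that collapses once the confound enters the model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300  # timepoints

# Synthetic signals: breathing tracks the task blocks, and the voxel is
# driven purely by the vascular signal, not by neural task activity.
task = rng.standard_normal(n)
vascular = 0.6 * task + 0.8 * rng.standard_normal(n)
voxel = vascular + 0.3 * rng.standard_normal(n)

# GLM with the task regressor only: vascular variance masquerades as
# task 'activation'.
X1 = np.column_stack([np.ones(n), task])
beta_naive = np.linalg.lstsq(X1, voxel, rcond=None)[0][1]

# GLM with the vascular covariate included: the task beta collapses.
X2 = np.column_stack([np.ones(n), task, vascular])
beta_adjusted = np.linalg.lstsq(X2, voxel, rcond=None)[0][1]

print(f"task beta without covariate: {beta_naive:.2f}")    # ~0.6 (spurious)
print(f"task beta with covariate:    {beta_adjusted:.2f}")  # ~0 (suppressed)
```

Real physiological noise is, of course, far less tidy than this idealized case; regression like this only ever partially removes it.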

What does this mean for cognitive neuroscience? For one, it should be clear that even well-controlled fMRI designs can exhibit such confounds. This doesn’t mean we should throw the baby out with the bathwater, though; some designs are better than others. Thankfully it’s pretty easy to measure respiration with most scanners, so at minimum it is probably a good idea to check whether one’s experimental conditions do indeed create differential respiration patterns. Further, we need to be especially cautious in cases like meditation or clinical fMRI, where special participant groups may have different baseline respiration rates or stronger parasympathetic responses to stimuli. Sadly, I’m afraid that looking back, these findings greatly limit our conclusions in any design that did not control for these issues. Remember that the insula and ACC are currently cognitive neuroscience’s hottest regions. I’m not even going to get into resting state, where these problems are all magnified tenfold. I’ll leave you with this image from Neuroskeptic, estimating the year’s most popular brain regions:

Are those spikes publication fads, every-task regions, or neurovascular artifacts? You be the judge.

 
edit: As many of you had questions or comments regarding the best way to deal with respiratory-related issues, I spoke briefly with resident noise expert Torben Lund at yesterday’s lab meeting. Removal of respiratory noise is fairly simple, but the real problem is end-tidal CO2. According to Torben, most noise experts agree that regression techniques only partially remove the artifact, and that an unknown amount is left behind even following signal regression. This may be due to slow vascular saturation effects that build up and remain irrespective of sheer breath frequency. A very tricky problem indeed, and certainly worth researching.
 
 
Note: credit goes to my methods teacher and fMRI noise expert Torben Lund, and to CFIN neurobiologist Rasmus Aamand, for introducing and explaining the basis of the CO2/respiration issue to me. Rasmus particularly, whose sharp comments led to my including respiration and pulse measures in my last meditation project.

Neuroscientists: What’s the most interesting question right now?

After 20 years of cognitive neuroscience, I sometimes feel frustrated by how little progress we’ve made. We still struggle with basic issues, like how to ask a subject if he’s in pain, or what exactly our multi-million dollar scanners measure. We lack a unifying theory linking information, psychological function, and neuroscientific measurement. We still publish all kinds of voodoo correlations, uncorrected p-values, and poorly operationalized blobfests. Yet we’ve also laid some of the most important foundational research of our time. In twenty years we’ve mapped a mind-boggling array of cognitive functions. Some of these attempts at localization may not hold; others may be built on shaky functional definitions or downright poor methods. Even in the face of this uncertainty, the sheer number and variety of functions that have been mapped is inspiring. Further, we’ve developed analytic tools that pave the way for an exciting new decade of multi-modal and connectomic research. Developments like resting-state fMRI, optogenetics, real-time fMRI, and multi-modal imaging make for a very exciting time to be a cognitive neuroscientist!

Online, things can seem a bit more pessimistic. Snarky methods blogs dedicated to revealing the worst in the field tend to do well, and nearly any social-media-savvy neurogeek will lament the depressing state of science journalism and the brain. While I am also tired of incessantly phrenological, blob-obsessed reports (“research finds god spot in the brain, are your children safe??”), I think we share some of the blame for not communicating properly about what interests and challenges us. For me, some of the most exciting areas of research are those concerned with getting straight about what our measurements mean; see the debates over noise in resting state or the neural underpinnings of the BOLD signal, for example. Yet these issues are often reported as dry methodological pieces, the writers themselves seemingly bored with the topic.

We need to do a better job illustrating to people just how complex and young our field is. The big, sexy issues are methodological in nature. They’re also phenomenological in nature. Right now neuroscience is struggling to define itself, unsure whether we should be asking our subjects how they feel or anesthetizing them. I believe that if we can illustrate just how tenuous much of our research is, including the really nagging problems, the public will better appreciate seemingly nuanced issues like rest-stimulus interaction and noise regression.

With that in mind- what are your most exciting questions, right now? What nagging thorn ails you at all steps in your research?

For me, the most interesting and nagging question is: what do people do when we ask them to do nothing? I’m talking about rest-stimulus interaction and mind-wandering. There seem to be two prevailing (pro-resting-state) views: that default mode network-related activity is related to subjective mind-wandering, and/or that it’s a form of global, integrative, stimulus-independent neural variability. On the first view, variability in participants’ ability to remain on-task drives slow alterations in behavior and stimulus-evoked brain activity. On the other, innate and spontaneous rhythms synchronize large brain networks in ways that alter stimulus processing and enable memory formation. Either way, we’re left with the idea that a large portion of our supposedly well-controlled, stimulus-related brain activity is in fact predicted by uncontrolled intrinsic brain activity. Perhaps even defined by it! When you consider that all this is contingent on the intrinsic activity being real brain activity, and not some kind of vascular or astrocyte-driven artifact, every research paradigm becomes a question of rest-stimulus interaction!

So neuroscientists, what keeps you up at night?

A brave new default mode in meditation practitioners- or just confused controls? Review of Brewer (2011)

Given that my own work focuses on cognitive control, intrinsic connectivity, and mental-training (e.g. meditation) I was pretty excited to see Brewer et al’s paper on just these topics appear in PNAS just in time for the winter holidays. I meant to review it straight away but have been buried under my own data analysis until recently. Sadly, when I finally got around to delving into it, my overall reaction was lukewarm at best. Without further ado, my review of:

“Meditation experience is associated with differences in default mode network activity and connectivity”

Abstract:

“Many philosophical and contemplative traditions teach that “living in the moment” increases happiness. However, the default mode of humans appears to be that of mind-wandering, which correlates with unhappiness, and with activation in a network of brain areas associated with self-referential processing. We investigated brain activity in experienced meditators and matched meditation-naive controls as they performed several different meditations (Concentration, Loving-Kindness, Choiceless Awareness). We found that the main nodes of the default mode network (medial prefrontal and posterior cingulate cortices) were relatively deactivated in experienced meditators across all meditation types. Furthermore, functional connectivity analysis revealed stronger coupling in experienced meditators between the posterior cingulate, dorsal anterior cingulate, and dorsolateral prefrontal cortices (regions previously implicated in self-monitoring and cognitive control), both at baseline and during meditation. Our findings demonstrate differences in the default-mode network that are consistent with decreased mind-wandering. As such, these provide a unique understanding of possible neural mechanisms of meditation.”

Summary:

Aims: 9/10

Methods: 5/10

Interpretation: 7/10

Importance/Generalizability: 4/10

Overall: 6.25/10

The good: simple, clear cut design, low amount of voodoo, relatively sensible findings

The bad: lack of behavioral co-variates to explain neural data, yet another cross-sectional design

The ugly: prominent reporting of uncorrected findings, comparison of meditation-naive controls to practitioners using meditation instructions (failure to control task demands).

Take-home: Some interesting conclusions, from a somewhat tired and inconclusive design. Poor construction of baseline condition leads to a shot-gun spattering of brain regions with a few that seem interesting given prior work. Let’s move beyond poorly controlled cross-sections and start unravelling the core mechanisms (if any) involved in mindfulness.

Extended Review:
Although this paper used typical GLM and functional connectivity analyses, it loses points in several areas. First, the authors repeatedly suggest that their “relative paucity of findings” may be “driven by the sensitivity of GLM analysis to fluctuations at baseline”, and that since meditation practitioners may be meditating at baseline, the contrast would be weak. However, I will side with Jensen et al (2011) here in saying: meditation-naive controls receiving less than 5 minutes of instruction in “focused attention, loving-kindness and choiceless awareness” are simply no controls at all. Excusing the GLM’s inability to detect differences, when those differences are quite obviously confounded by the lack of an appropriately controlled baseline, is galling at best. This is why we use a GLM approach; it’s senseless to make conclusions about brain activity when your baseline is no baseline at all. Telling meditation-naive controls to utilize esoteric cultural practices to which they have only just been introduced, and then comparing that to highly experienced practitioners, is a perfect storm of cognitive confusion and poorly controlled demand characteristics. Further, I am disappointed in the review process that allowed the following statement, “We found a similar pattern in the medial prefrontal cortex (mPFC), another primary node of the DMN, although it did not survive whole-brain correction for significance”, followed by this image:

[image: uncorrected mPFC results]

These results are then referred to repeatedly in the following discussion. I’m sorry, but when did uncorrected findings suddenly become interpretable? I blame the reviewers here over the authors; they should have known better. The mPFC finding did not survive correction and hence should not be included in anything other than an analysis explicitly labeled as exploratory. In fact, it’s totally unclear from the methods section of this paper how these findings were discovered at all: did the authors first examine the uncorrected maps and then re-analyze them using FWE correction? Or did they reduce their threshold in an exploratory post-hoc fashion? These things make a difference, and I’m appalled that the reviewers let the article go to print as it is, when figure 1 and the discussion clearly give the non-fMRI-savvy reader the impression that a main finding of this study is mPFC activation during meditation. Can we please all agree to stop reporting uncorrected p-values?
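The arithmetic behind the correction requirement is easy to demonstrate: across tens of thousands of voxels, pure noise alone will clear an uncorrected threshold in a scattering of locations. A toy simulation (the subject and voxel counts are arbitrary values I picked for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_subjects, n_voxels = 20, 10000

# Pure-noise group 'contrast' maps: there is no real effect anywhere.
data = rng.standard_normal((n_subjects, n_voxels))
t, p = stats.ttest_1samp(data, 0.0, axis=0)

n_sig = int((p < 0.001).sum())
print(n_sig)  # roughly 10 voxels 'activate' by chance alone at p < .001
```

With no correction, every null brain map comes pre-loaded with a handful of “significant” blobs; family-wise or false-discovery-rate correction exists precisely to soak these up.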

I will give the authors this much; the descriptions of practice, and the theoretical guideposts are all quite coherent and well put-together. I found their discussion of possible mechanisms of DMN alteration in meditation to be intriguing, even if I do not agree with their conclusion. Still, it pains me to see a paper with so much potential fail to address the pitfalls in meditation research that should now be well known. Indeed the authors themselves make much ado about how difficult proper controls are, yet seem somehow oblivious to the poorly controlled design they here report. This leads me to my own reinterpretation of their data.

A new default mode, or confused controls?

Brewer et al (2011) report that, when using verbally guided meditation instructions with meditation-naive controls and experienced practitioners, greater activations are found in the PCC, temporal regions, and, for loving-kindness, the amygdala. Given strong evidence from colleagues Christian Jensen et al (2011) that these kinds of contrasts better represent differences in attentional effort than any mechanism inherent to meditation, I can’t help but wonder if what we’re seeing here is simply some controls trying to follow esoteric instructions and getting confused in the process. Consider the instruction for the choiceless awareness condition:

“Please pay attention to whatever comes into your awareness, whether it is a thought, emotion, or body sensation. Just follow it until something else comes into your awareness, not trying to hold onto it or change it in any way. When something else comes into your awareness, just pay attention to it until the next thing comes along”

Given that in most contemplative traditions choiceless awareness techniques are typically late-stage, advanced practices, in which the very concept of grasping at a stimulus is distinctly altered and laden with an often spiritual meaning, it seems obvious to me that such an instruction constitutes an excellent mind-wandering inducement for naive controls. Do you meditate? I do a little, and yet I find these instructions extremely difficult to follow without essentially sending my mind in a thousand directions. Am I doing this correctly? When should I shift? Is this a thought or am I just feeling hungry? These things constitute mind-wandering, but for the controls, I would argue they constitute following the instructions. The point is that you simply can’t make meaningful conclusions about the neural mechanisms involved in mindfulness from these kinds of instructions.

Finally, let’s examine the functional-connectivity analysis. To be honest, there isn’t a whole lot to report here; the functional connectivity during meditation is perhaps confounded by the same issues I list above, which seems to me a probable cause for the diverse spread of regions reported between controls and meditators. I did find this bit to be interesting:

“Using the mPFC as the seed region, we found increased connectivity with the fusiform gyrus, inferior temporal and parahippocampal gyri, and left posterior insula (among other regions) in meditators relative to controls during meditation (Fig. 3, Fig. S1H, and Table S3). A subset of those regions showed the same relatively increased connectivity in meditators during the baseline period as well (Fig. S1G and Table 1).”

I found it interesting that the meditation conditions appear to co-activate mPFC and insula, and would love to see this finding replicated in a properly controlled design. I also have a nagging wonder as to why the authors didn’t bother to conduct a second-level covariance analysis of their findings with the self-reported mind-wandering scores. If these findings accurately reflect meditation-induced alterations in the DMN, or as the authors more brazenly suggest, “an entirely new default network”, wouldn’t we expect their PCC modulations to be predicted by individual variability in mind-wandering self-reports? Of course, we could open the whole can of worms that is “what does it mean when you ask participants if they ‘experienced mind-wandering’”, but I’ll leave that for a future review. At least the authors throw a bone to neurophenomenology, suggesting in the discussion that future work utilize first-person methodology. Indeed.

Last, it occurs to me that the primary finding, of increased DLPFC and ACC activity in meditators > controls, also fits well with my interpretation that this design is confounded by demand characteristics. If you take a naive subject and put them in the scanner with these instructions, I’ve argued that they’re probably going to do something a whole lot like mind-wandering. On the other hand, an experienced practitioner has a whole lot of implicit pressure on them to live up to their tradition. They know what they are there for, and hence they know that they should be doing their thing with as much effort as possible. So what does the contrast meditators > naive controls really give us? It gives us mind-wandering in the naive group, and increased attentional effort in the practitioner group. We can’t conclude anything from this design regarding mechanisms intrinsic to mindfulness; I predict that if you constructed a similar setting with any kind of dedicated specialist, and gave instructions like “think about your profession, what it means to you, remember a time you did really well”, you would see the exact same kind of results. You just can’t compare the incomparable.

Disclaimer: as usual, I review in the name of science, and thank the authors wholeheartedly for the great effort and attention to detail that goes into these projects. It’s also worth mentioning that my own research focuses on many of these exact issues in mental-training research, and hence I’m probably a bit biased in what I view as important issues.

Intrinsic correlations between Salience, Primary Sensory, and Default Mode Networks following MBSR

Going through my RSS backlog today, I was excited to see Kilpatrick et al.’s “Impact of Mindfulness-Based Stress Reduction Training on Intrinsic Brain Connectivity” appear in this week’s early view Neuroimage. Although I try to keep my own work focused on primary research in cognition and connectivity, mindfulness-training (MT) is a central part of my research. Additionally, there are few published findings on intrinsic connectivity in this area. Previous research has mainly focused on between-group differences in anatomical structure (gray and white matter for example) and task-related activity. A few more recent studies have gone as far as to randomize participants into wait-listed control and MT groups.

While these studies are interesting, they are of course limited in their scope by several factors, chief among them the lack of active controls. Active controls are ‘mock’ interventions (or real ones) designed to control for every other aspect of being involved in an intervention (placebo, community, motivation) in order to isolate the variables specific to the treatment (in this case meditation, but not sitting, breathing, or feeling special). My supervisor Antoine Lutz emphasizes that in addition to our active-controlled research here in Århus, his group at Wisconsin-Madison and others are actively preparing such datasets. Active controls are important, as there is a great deal of research demonstrating that cognition itself is susceptible to placebo-like motivational effects. All in all, I’ve seen several active-controlled, cognitive-behavioral studies in review that suggest we should be strongly skeptical of any non-actively-controlled findings. While I can’t discuss these in detail, I will mention some of these issues in my review of the Neuroimage manuscript. It suffices to say, however, that if you are working on a passive-controlled study in this area, you had better get it out fast, as you can expect reviewers to greatly tighten their expectations in the coming months as more rigorous papers appear. As Sara Lazar put it during my visit to her lab last summer, “the low-hanging fruit of MBSR brain research are rapidly vanishing”. Overall this is a good thing for the community, and we’ll see why in a moment.

Now let us turn to the paper at hand. Kilpatrick et al start with a standard summary of MBSR and rsfMRI research, focusing on findings indicating that MBSR trains focused attention, sensory introspection/interoception, and perception. They briefly review now well-established findings indicating that rsfMRI is sensitive to training-related changes, including studies demonstrating the sensitivity of the resting state to conditions such as fatigue, eyes-open vs eyes-closed, and recent sleep. This is all well and good, but I think it’s a bit odd when we see just how they collected their data.

Briefly, they recruited 32 healthy adults and randomized them to MBSR or waitlist control groups. Participants then completed the Mindful Attention Awareness Scale (MAAS), and the MBSR group received 8 weeks of diary-logged standard MBSR training. After training, participants were recalled for the rsfMRI scan. An important detail here is that participants were not scanned both before and after training, rendering the fMRI portion of the experiment closer to a cross-sectional than a true longitudinal design. At the time of the scan, the researchers administered two ‘task-free states’, with and without auditory white noise. The authors indicate that the noise condition was included “to enable new analysis methods not conducted here”, presumably to average out scanner-noise-related effects. They later report no differences between the two conditions, which makes me wonder how much of what follows is meditation-specific versus focusing-on-scanner-noise-specific. Finally, they administered the ‘task-free’ states with a slight twist:

“”During this baseline scan of about 5 min, we would like you to again stay as still as possible and be mindfully aware of your surroundings. Please keep your eyes closed during this procedure. Continue to be mindfully aware of whatever you notice in your surroundings and your own sensations. Mindful awareness means that you pay attention to your present moment experience, in this case the changing sounds of the scanner/changing background sounds played through the headphones, and to bring interest and curiosity to how you are responding to them.”

While the manipulation makes sense given the experimenters’ hypothesis concerning sensory processing, an ongoing controversy in resting-state research is just what it is that constitutes ‘rest’. Research here suggests that functional connectivity is sensitive to task instructions and variations in visual stimulation, and many complain about the lack of specificity between different rest conditions. Kilpatrick et al’s manipulation makes sense given that what they really want to see is meditation-related alterations, but it’s a dangerous leap without first establishing the relationship between ‘true rest’ and their ‘auditory meditation’ condition. Research on the impact of scanner noise indicates some degree of noise-related nuisance effects, as well as some functionally significant effects. If you’ve never been in an MR experiment: the scanner is LOUD. During my first scan I actually started feeling claustrophobic due to the oppressive, machine-gun-like noise of the gradient coil. Anyway, it’s really troubling that Kilpatrick et al don’t include a totally task-free scan for comparison, and I’m hesitant to call this a resting-state finding without further clarification.

The study is extremely interesting, but it’s important to note its limitations:

  1. Lack of active control: groups are not controlled for motivation.
  2. No pre/post scan.
  3. Novel resting state without comparison condition.
  4. Findings are discussed as ‘training related’ without report of correlation with reported practice hours.
  5. Anti-correlations reported with global-signal nuisance regression. No discussion of possible regression-related inducement (see edit).
  6. Discussion of findings is unclear; reported as greater DMN x Auditory correlation, but the independent component includes large portions of the salience network.

Ultimately they identify an “auditory/salience” independent component network (ICN) (primary auditory cortex, STG, posterior insula, ACC, and lateral frontal cortex) and then conduct seed-regression analyses of this network with areas of the DMN and dorsal attention network (DAN). I find it highly strange that they pick up a network that seems to conflate primary sensory and salience regions, as do the researchers, who state: “Therefore, the ICN was labeled as “auditory/salience”. It is unclear why the components split differently in our sample, perhaps the instructions that brought attention to auditory input altered the covariance structure somewhat.” Given the lack of motivational control in the study, the issues begin to pile onto one another and I am not sure what we can really conclude. They further find that the MBSR group demonstrates greater “auditory/salience x DMN connectivity” and “greater visual and auditory functional connectivity” (see image below). They also report several increased anti-correlations between the aud/sal network, dMPFC, and visual regions. I find this to be an extremely tantalizing finding, as it would reflect a decrease in processing automaticity amongst the SAL, CEN, and DMN networks. There are, however, some serious problems with these kinds of analyses that the authors don’t address, and so we again must reserve any strong conclusions. Here is what Kilpatrick et al conclude:
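For readers unfamiliar with the method, seed-based connectivity of the sort reported here amounts to correlating one region’s time series with every other region’s. A toy sketch on synthetic data (the region roles and signal strengths are invented purely for illustration, not drawn from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
n_tp, n_regions = 240, 5

# Synthetic region time series: region 0 is the seed, regions 1-2 share
# its underlying signal, regions 3-4 are independent noise.
shared = rng.standard_normal(n_tp)
data = rng.standard_normal((n_tp, n_regions))
data[:, 0] += 2.0 * shared
data[:, 1] += 1.5 * shared
data[:, 2] += 1.5 * shared

# Seed-based 'connectivity': correlate the seed with every region.
seed = data[:, 0]
r = np.array([np.corrcoef(seed, data[:, j])[0, 1] for j in range(n_regions)])
print(np.round(r, 2))  # high for regions 1-2, near zero for regions 3-4
```

The resulting correlation map is what gets compared between groups, which is exactly why anything that shifts shared variance wholesale, motivation, vigilance, or physiological noise, can masquerade as a “connectivity difference”.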

“The current findings extend the results of prior studies that showed meditation-related changes in specific brain regions active during attention and sensory processing by providing evidence that MBSR trained compared to untrained subjects, during a focused attention instruction, have increased connectivity within sensory networks and between regions associated with attentional processes and those in the attended sensory cortex. In addition they show greater differentiation between regions associated with attentional processes and the unattended sensory cortex as well as greater differentiation between attended and unattended sensory networks”

As is typical, the list of findings is quite long and I won’t bother re-stating it all here. Given the resting instructions, it seems clear that the freshly post-MBSR participants are likely to have engaged a pretty dedicated set of cognitive operations during the scan. Yet it’s totally unclear what the control group would do given these contemplative instructions. Presumably they’d just lie in the scanner and try not to tune out the noise, but you can see how it’s not clear that these conditions are really comparable without some idea of what’s going on. In essence, what you (might) have here is one group actually doing something (meditation) and the other group not doing much at all. Ideally you want to see how training impacts the underlying process in a comparable way. Motivation has been repeatedly linked to BOLD signal intensity, and in this case it could very well be that these findings are simple artifacts of motivation to perform. If one group is actually practicing mindfulness and the other isn’t, you have not isolated the variable of interest. The authors could have somewhat alleviated this by including data from the additional pain task (“not reported here”) and/or at least giving us a correlation of the findings with the MAAS scale. I emphasize that I do find the findings of this paper interesting, and that they map extremely well onto my own hypotheses about how RSNs interact with mindfulness training, but we must interpret them with caution.

Overall I think this was a project with a strong theoretical motivation and some very interesting ideas. One problem with looking at state mindfulness in the scanner is the cramped, noisy environment, and I think Kilpatrick et al had a great idea in their attempt to use the noise itself as a manipulation. Further, the findings make a good deal of sense. Still, given the above limitations, it’s important to be really careful with our conclusions. At best, this study warrants an extremely rigorous follow-up, and I wish Neuroimage had published it with a bit more information, such as the status of any rest-MAAS correlations. Anyway, this post has gotten quite long and I think I’d best get back to work. For my next post I’ll go into more detail about some of the issues confronting resting-state research (what is “rest”?) and mindfulness research (the role of active controls for community, motivation, and placebo effects).

edit: just realized I never explained limitation #5. See my “beautiful noise” slides (previous post) regarding the controversy over global signal regression and anti-correlations. Simply put, there is somewhat convincing evidence that this procedure (designed to eliminate low-frequency nuisance covariates) may actually induce anti-correlations where none exist, since removing the global mean mathematically forces the remaining correlations to sum to a negative value. While it’s not a slam-dunk (see the response by Fox et al), it’s an extremely controversial area and all anti-correlative findings should be interpreted in light of this possibility.
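The worry about global signal regression is easy to demonstrate on synthetic data. In this toy numpy sketch (all signals are invented), four regions share a common fluctuation and every pair is positively correlated; after the mean-across-regions ‘global signal’ is regressed out, the same pairs come out anti-correlated:

```python
import numpy as np

rng = np.random.default_rng(2)
n_tp, n_regions = 300, 4

# Four synthetic regions sharing one global fluctuation plus independent
# noise: every pair starts out positively correlated.
global_true = rng.standard_normal(n_tp)
data = global_true[:, None] + rng.standard_normal((n_tp, n_regions))
data -= data.mean(axis=0)          # demean each region

before = np.corrcoef(data.T)[0, 1]

# Global signal regression: fit and remove the mean-across-regions series.
g = data.mean(axis=1)
beta = data.T @ g / (g @ g)        # per-region regression weights
cleaned = data - np.outer(g, beta)

after = np.corrcoef(cleaned.T)[0, 1]
print(round(before, 2), round(after, 2))  # positive before, negative after
```

With exchangeable regions like these, removing the global mean forces the residual time series to sum to zero, so the surviving correlations cannot all stay positive; that is the mathematical heart of the inducement argument.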

If you like this post please let me know in the comments! If I can get away with rambling about this kind of stuff, I’ll do so more frequently.