Mind-wandering and metacognition: variation between internal and external thought predicts improved error awareness

Yesterday I published my first paper on mind-wandering and metacognition, with Jonny Smallwood, Antoine Lutz, and collaborators. This was a fun project for me, as I spent much of my PhD exhaustively reading the literature on mind-wandering and default mode activity, resulting in a lot of intense debate at my research center. When we had Jonny over as an opponent at my PhD defense, the chance to collaborate was simply too good to pass up. Mind-wandering is super interesting precisely because we do it so often. One of my favourite anecdotes comes from around the time I was arguing heavily for the role of the default mode in spontaneous cognition to some very skeptical colleagues. The next day, while waiting to cross the street, one such colleague rode up next to me on his bicycle and joked, “are you thinking about the default mode?” And indeed I was – meta-mind-wandering!

One thing that has really bothered me about much of the mind-wandering literature is how frequently it is presented as attention = good, mind-wandering = bad. Can you imagine how unpleasant it would be if we never mind-wandered? Just picture trying to solve a difficult task while being totally, 100% focused. This kind of hyper-locked attention can easily become pathological, preventing us from altering course when our behaviour goes awry or when something internal needs to be adjusted. Mind-wandering serves many positive purposes, from stimulating our imaginations to motivating us in boring situations with internal rewards (boring task… “ahhhh remember that nice mojito you had on the beach last year?”). Yet we largely see papers exploring the costs – mood deficits, cognitive control failures, and so on. In the meditation literature this has even been taken up to form the misguided idea that meditation should reduce or eliminate mind-wandering (even though there is almost zero evidence to this effect…).

Sometimes our theories end up reflecting our methodological apparatus, to the extent that they may not fully capture reality. I think this is part of what has happened with mind-wandering, which was originally defined in relation to difficult (and boring) attention tasks. Worse, mind-wandering is usually operationalized as a dichotomous state (“off-task” vs “on-task”), when a little introspection strongly suggests it is much more of a fuzzy, dynamic transition between meta-cognitive and sensory processes. By studying mind-wandering just as the ‘amount’ (or mean number) of times you were “off-task”, we’re taking the stream of consciousness and acting as if the ‘depth’ at one point in the river is the entire story – but what about the flow rate, tidal patterns, fishies, and all the dynamic variability that defines the river? My idea was that one simple way to get at this is to look at the within-subject variability of mind-wandering, rather than just the overall mean “rate”. In this way we could get some idea of the extent to which a person’s mind-wandering fluctuated over time, rather than just categorising these events dichotomously.
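
To make this concrete, here is a minimal sketch (with invented data, not our actual analysis code) of the two summary measures: the per-subject mean of the thought-probe ratings captures the overall ‘rate’ of mind-wandering, while the per-subject standard deviation captures how much it fluctuates over time.

```python
# Minimal sketch of the two summary measures: mean TUT rating (overall level)
# and within-subject SD (fluctuation). The data here are made up for illustration.
import pandas as pd

# One row per thought probe: subject ID and the 1-7 task-unrelated-thought rating
probes = pd.DataFrame({
    "subject": [1, 1, 1, 1, 2, 2, 2, 2],
    "tut":     [2, 6, 3, 7, 4, 4, 5, 4],
})

summary = probes.groupby("subject")["tut"].agg(
    tut_mean="mean",        # overall 'rate' of mind-wandering
    tut_variability="std",  # how much the reports fluctuate over time
)
print(summary)
```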

The EAT task used in my study, with thought probes.

To do this, we combined a classical meta-cognitive response inhibition paradigm, the “error awareness task” (pictured above), with standard interleaved “thought probes” asking participants to rate on a scale of 1-7 the subjective frequency of task-unrelated thoughts (TUTs) in the task interval prior to the probe. We then examined the relationship between the ability to perform the task (“stop accuracy”) and each participant’s mean TUT rating. Here we expected to replicate the well-established relationship between TUTs and attention decrements (after all, it’s difficult to inhibit your behaviour if you are thinking about the hunky babe you saw at the beach last year!). We further examined whether the standard deviation of TUT within each participant (TUT variability) would predict error monitoring, reflecting a relationship between metacognition and increased fluctuation between internal and external cognition (after all, isn’t that kind of the point of metacognition?). Of course, for specificity and completeness, each multiple regression analysis included the other variable as a control predictor. Here is the key finding from the paper:

Regression analysis of TUT, TUT variability, stop accuracy, and error awareness.

As you can see in the bottom right, we clearly replicated the relationship of increased overall TUT predicting poorer stop performance. Individuals who report an overall high intensity/frequency of mind-wandering unsurprisingly commit more errors. What was really interesting, however, was that the more variable a participant’s mind-wandering, the greater their error-monitoring capacity (top left). This suggests that individuals who show more fluctuation between internally and externally oriented attention may be able to better enjoy the benefits of mind-wandering while simultaneously limiting its costs. Of course, these are only individual differences (i.e. correlations) and should be treated as highly preliminary. It is possible, for example, that participants who use more of the TUT scale have higher meta-cognitive ability in general, rather than the two variables being causally linked in the way we suggest. We are careful to raise these and other limitations in the paper, but I do think this finding is a nice first step.
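
For readers curious about the statistics, here is a minimal sketch of the kind of regression models described above, using hypothetical data and variable names (the actual models, covariates, and sample are detailed in the paper):

```python
# Sketch of the two multiple regressions: each behavioural outcome is
# predicted from mean TUT and TUT variability entered together, so each
# predictor is controlled for the other. Data below are invented.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "tut_mean":        [3.1, 4.5, 2.2, 5.0, 3.8, 2.9, 4.1, 3.3],
    "tut_variability": [0.8, 1.6, 0.5, 1.9, 1.1, 0.7, 1.4, 0.9],
    "stop_accuracy":   [0.62, 0.48, 0.71, 0.44, 0.55, 0.66, 0.51, 0.60],
    "error_awareness": [0.55, 0.70, 0.50, 0.78, 0.60, 0.52, 0.72, 0.58],
})

# Higher mean TUT expected to predict poorer stop accuracy...
stop_model = smf.ols("stop_accuracy ~ tut_mean + tut_variability", data=df).fit()
# ...while higher TUT variability expected to predict better error awareness.
aware_model = smf.ols("error_awareness ~ tut_mean + tut_variability", data=df).fit()

print(stop_model.params)
print(aware_model.params)
```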

To ‘probe’ a bit further we looked at the BOLD responses to correct stops, and the parametric correlation of task-related BOLD with the TUT ratings:

Activations during correct stop trials.
Deactivations to stop trials (blue) and parametric correlation with TUT reports (red)

As you can see, correct stop trials elicit a rather canonical activation pattern in the motor-inhibition and salience networks, with concurrent deactivations in visual cortex and the default mode network (second figure, blue blobs). I think of this pattern a bit like the brain receiving the ‘stop signal’ and going (à la Picard): “FULL STOP, MAIN VIEWER OFF, FIRE THE PHOTON TORPEDOES!”, launching into full response-recovery mode. Interestingly, while we replicated the finding of medial-prefrontal co-variation with TUTs (second figure, red blob), this area was substantially more rostral than the stop-related deactivations, supporting previous findings of some degree of functional segregation between the inhibitory and mind-wandering related components of the DMN.
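
For those unfamiliar with parametric modulation, here is a simplified sketch of the general logic (invented onsets and ratings, a toy HRF, and none of the real preprocessing): alongside the main stop-trial regressor, a second regressor carries the mean-centred TUT ratings as trial amplitudes, so its beta captures TUT-related modulation of the stop response over and above the main effect.

```python
# Toy example of a parametric modulator in a GLM design matrix.
# Onsets, ratings, and the HRF are simplified stand-ins for illustration.
import numpy as np
from scipy.stats import gamma

tr, n_scans = 2.0, 300

def hrf(t):
    # crude double-gamma approximation of a canonical HRF
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

onsets = np.array([20.0, 80.0, 150.0, 210.0])   # hypothetical stop-trial onsets (s)
tut    = np.array([2.0, 6.0, 3.0, 5.0])         # TUT rating preceding each trial

def build_regressor(amplitudes):
    stick = np.zeros(n_scans)
    stick[(onsets / tr).astype(int)] = amplitudes
    return np.convolve(stick, hrf(np.arange(0, 32, tr)))[:n_scans]

main_effect = build_regressor(np.ones_like(onsets))      # correct-stop response
parametric  = build_regressor(tut - tut.mean())          # mean-centred TUT modulator
X = np.column_stack([main_effect, parametric, np.ones(n_scans)])  # design matrix
```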

Finally, when examining the Aware > Unaware errors contrast, we replicated the typical salience network activations (mid-cingulate and anterior insula). Interestingly, we also found strong bilateral activations in an area of the inferior parietal cortex that is also considered part of the default mode network. This finding further strengthens the link between mind-wandering and metacognition, indicating that the salience and default mode networks may work in concert during conscious error awareness:

Activations to Aware > Unaware errors contrast.

In all, this was a very valuable and fun study for me. As a PhD student, being able to replicate the function of the classic “executive, salience, and default mode” ‘resting state’ networks with a basic task was a great experience, helping me place some confidence in these labels. I was also able to combine a classical behavioral metacognition task with introspective thought probes, and show that they do indeed contain valuable information about task performance and related brain processes. Importantly though, we showed that the ‘content’ of the mind-wandering reports doesn’t tell the whole story of spontaneous cognition. In the future I would like to explore this idea further, perhaps by taking a time-series approach to probe the dynamics of mind-wandering, using a simple continuous feedback device that participants could use throughout an experiment. In the affect literature such devices have been used to probe the dynamics of valence and arousal while participants view naturalistic movies, and I believe such an approach could reveal even greater granularity in how the experience of mind-wandering (and its fluctuation) interacts with cognition. Our findings suggest that the relationship between mind-wandering and task performance may be more nuanced than mere antagonism, an important finding I hope to explore in future research.

Citation: Allen M, Smallwood J, Christensen J, Gramm D, Rasmussen B, Jensen CG, Roepstorff A and Lutz A (2013) The balanced mind: the variability of task-unrelated thoughts predicts error monitoring. Front. Hum. Neurosci. 7:743. doi: 10.3389/fnhum.2013.00743

Intrinsic correlations between Salience, Primary Sensory, and Default Mode Networks following MBSR

Going through my RSS backlog today, I was excited to see Kilpatrick et al.’s “Impact of Mindfulness-Based Stress Reduction Training on Intrinsic Brain Connectivity” appear in this week’s early view of NeuroImage. Although I try to keep my own work focused on primary research in cognition and connectivity, mindfulness training (MT) is a central part of my research. Additionally, there are few published findings on intrinsic connectivity in this area. Previous research has mainly focused on between-group differences in anatomical structure (gray and white matter, for example) and task-related activity. A few more recent studies have gone as far as to randomize participants into wait-listed control and MT groups.

While these studies are interesting, they are of course limited in their scope by several factors, chief among them the lack of active control groups. My supervisor Antoine Lutz emphasizes this point; in addition to our active-controlled research here in Århus, his group at Wisconsin-Madison and others are actively preparing such datasets. Active controls are simply ‘mock’ interventions (or real ones) designed to control for every possible aspect of being involved in an intervention (placebo, community, motivation) in order to isolate the variables specific to that treatment (in this case, meditation, but not sitting, breathing, or feeling special). Active controls are important as there is a great deal of research demonstrating that cognition itself is susceptible to placebo-like motivational effects. All in all, I’ve seen several active-controlled, cognitive-behavioral studies in review that suggest we should be strongly skeptical of any non-actively-controlled findings. While I can’t discuss these in detail, I will mention some of these issues in my review of the NeuroImage manuscript. Suffice it to say, however, that if you are working on a passive-controlled study in this area, you had better get it out fast, as you can expect reviewers to greatly tighten their expectations in the coming months as more and more rigorous papers appear. As Sara Lazar put it during my visit to her lab last summer, “the low-hanging fruit of MBSR brain research are rapidly vanishing”. Overall this is a good thing for the community, and we’ll see why in a moment.

Now let us turn to the paper at hand. Kilpatrick et al start with a standard summary of MBSR and rsfMRI research, focusing on findings indicating that MBSR trains focused attention, sensory introspection/interoception, and perception. They briefly review the now well-established findings indicating that rsfMRI is sensitive to training-related changes, including studies demonstrating the sensitivity of the resting state to conditions such as fatigue, eyes-open vs eyes-closed, and recent sleep. This is all well and good, but I think it becomes a bit odd once we see just how they collected their data.

Briefly, they recruited 32 healthy adults for randomization to MBSR and waitlist control groups. Participants then completed the Mindful Attention Awareness Scale (MAAS), and the MBSR group received 8 weeks of diary-logged standard MBSR training. After training, participants were recalled for the rsfMRI scan. An important detail here is that participants were not scanned before and after training, rendering the fMRI portion of the experiment closer to a cross-sectional than a true longitudinal design. At the time of scan, the researchers administered two ‘task-free’ states, with and without auditory white noise. The authors indicate that the noise condition is included “to enable new analysis methods not conducted here”, presumably to average out scanner-noise-related effects. They later report no differences between the two conditions, which makes me wonder how much of what follows is meditation-specific versus focusing-on-scanner-noise-specific. Finally, they administered the ‘task-free’ states with a slight twist:

“During this baseline scan of about 5 min, we would like you to again stay as still as possible and be mindfully aware of your surroundings. Please keep your eyes closed during this procedure. Continue to be mindfully aware of whatever you notice in your surroundings and your own sensations. Mindful awareness means that you pay attention to your present moment experience, in this case the changing sounds of the scanner/changing background sounds played through the headphones, and to bring interest and curiosity to how you are responding to them.”

While the manipulation makes sense given the experimenters’ hypothesis concerning sensory processing, an ongoing controversy in resting-state research is just what it is that constitutes ‘rest’. Research here suggests that functional connectivity is sensitive to task instructions and variations in visual stimulation, and many complain about the lack of specificity across different rest conditions. Kilpatrick et al’s manipulation makes sense given that what they really want to see is meditation-related alterations, but it’s a dangerous leap without first establishing the relationship between ‘true rest’ and their ‘auditory meditation’ condition. Research on the impact of scanner noise indicates some degree of noise-related nuisance effects, and also some functionally significant effects. If you’ve never been in an MR experiment, the scanner is LOUD. During my first scan I actually started feeling claustrophobic due to the oppressive, machine-gun-like noise of the gradient coil. Anyway, it’s really troubling that Kilpatrick et al don’t include a totally task-free condition for comparison, and I’m hesitant to call this a resting-state finding without further clarification.

The study is extremely interesting, but it’s important to note its limitations:

  1. Lack of active control: groups are not controlled for motivation.
  2. No pre/post scan.
  3. Novel resting state without a comparison condition.
  4. Findings are discussed as ‘training related’ without a report of their correlation with reported practice hours.
  5. Anti-correlations reported with global-signal nuisance regression; no discussion of possible regression-related inducement (see edit).
  6. Discussion of findings is unclear; reported as greater DMN x Auditory correlation, but the independent component includes large portions of the salience network.

Ultimately they identify an “auditory/salience” independent component network (ICN) (primary auditory cortex, STG, posterior insula, ACC, and lateral frontal cortex) and then conduct seed-regression analyses relating this network to areas of the DMN and dorsal attention network (DAN). I find it highly strange that they pick up a network that seems to conflate primary sensory and salience regions, as do the researchers, who state: “Therefore, the ICN was labeled as “auditory/salience”. It is unclear why the components split differently in our sample, perhaps the instructions that brought attention to auditory input altered the covariance structure somewhat.” Given the lack of motivational control in the study, the issues begin to pile onto one another and I am not sure what we can really conclude. They further find that the MBSR group demonstrates greater “auditory/salience x DMN connectivity” and “greater visual and auditory functional connectivity” (see image below). They also report several increased anti-correlations between the aud/sal network, dMPFC, and visual regions. I find this to be an extremely tantalizing finding, as it would reflect a decrease in processing automaticity amongst the salience, central executive, and default mode networks. There are some serious problems with these kinds of analyses that the authors don’t address, and so we again must reserve any strong conclusions. Here is what Kilpatrick et al conclude:

“The current findings extend the results of prior studies that showed meditation-related changes in specific brain regions active during attention and sensory processing by providing evidence that MBSR trained compared to untrained subjects, during a focused attention instruction, have increased connectivity within sensory networks and between regions associated with attentional processes and those in the attended sensory cortex. In addition they show greater differentiation between regions associated with attentional processes and the unattended sensory cortex as well as greater differentiation between attended and unattended sensory networks”

As is typical, the list of findings is quite long and I won’t bother re-stating it all here. Given the resting instructions, it seems clear that the freshly post-MBSR participants are likely to have engaged a fairly dedicated set of cognitive operations during the scan. Yet it’s totally unclear what the control group would do given these contemplative instructions. Presumably they’d just lie in the scanner and try not to tune out the noise – but you can see how it’s not clear that these conditions are really comparable without having some idea of what’s going on. In essence, what you (might) have here is one group actually doing something (meditating) and the other group not doing much at all. Ideally you want to see how training impacts the underlying process in a comparable way. Motivation has been repeatedly linked to BOLD signal intensity, and in this case it could very well be that these findings are simply artifacts of motivation to perform. If one group is actually practicing mindfulness and the other isn’t, you have not isolated the variable of interest. The authors could have somewhat alleviated this by including data from the additional pain task (“not reported here”) and/or at least giving us a correlation of the findings with the MAAS. I emphasize that I do find the findings of this paper interesting, and they map extremely well onto my own hypotheses about how RSNs interact with mindfulness training, but we must interpret them with caution.
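
As an aside for readers unfamiliar with how such between-network connectivity comparisons are typically computed, here is a schematic sketch with simulated data (my own simplification, not the authors’ pipeline): per-subject network time courses are correlated, Fisher z-transformed, and the z values compared between groups.

```python
# Schematic of a between-group connectivity comparison: correlate two network
# time courses per subject, Fisher z-transform, then compare groups.
# All data here are random noise, purely for illustration.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_timepoints, n_per_group = 240, 16

def connectivity_z(tc_a, tc_b):
    r = np.corrcoef(tc_a, tc_b)[0, 1]
    return np.arctanh(r)   # Fisher z-transform

z_mbsr = [connectivity_z(rng.standard_normal(n_timepoints),
                         rng.standard_normal(n_timepoints))
          for _ in range(n_per_group)]
z_ctrl = [connectivity_z(rng.standard_normal(n_timepoints),
                         rng.standard_normal(n_timepoints))
          for _ in range(n_per_group)]

print(ttest_ind(z_mbsr, z_ctrl))   # between-group test on the z values
```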

Overall I think this was a project with a strong theoretical motivation and some very interesting ideas. One problem with looking at state mindfulness in the scanner is the cramped, noisy environment, and I think Kilpatrick et al had a great idea in their attempt to use the noise itself as a manipulation. Further, the findings make a good deal of sense. Still, given the above limitations, it’s important to be really careful with our conclusions. At best, this study warrants an extremely rigorous follow-up, and I wish NeuroImage had published it with a bit more information, such as the status of any rest-MAAS correlations. Anyway, this post has gotten quite long and I think I’d best get back to work – for my next post I think I’ll go into more detail about some of the issues confronting resting-state research (what is “rest”?) and mindfulness research (the role of active controls for community, motivation, and placebo effects), and what they mean for resting-state research.

edit: just realized I never explained limitation #5. See my “beautiful noise” slides (previous post) regarding the controversy of global signal regression and anti-correlation. Simply put, there is somewhat convincing evidence that this procedure (designed to eliminate low-frequency nuisance co-variates) may actually mathematically induce anti-correlations where none exist, probably due to regression to the mean. While it’s not a slam-dunk (see response by Fox et al), it’s an extremely controversial area and all anti-correlative findings should be interpreted in light of this possibility.
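
To see why this matters, here is a toy simulation (my own, not from either paper) in which two signals that share a common component end up strongly anti-correlated once the global mean is regressed out, even though their true relationship is positive:

```python
# Toy demonstration: global signal regression (GSR) can induce anti-correlation.
# Two "networks" share a common signal; after regressing out their mean,
# the residuals correlate negatively even though the raw signals correlate
# positively. Simulated data only.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
shared = rng.standard_normal(n)                 # signal common to both networks
net_a = shared + 0.5 * rng.standard_normal(n)
net_b = shared + 0.5 * rng.standard_normal(n)
global_signal = (net_a + net_b) / 2             # stand-in for the global mean

def regress_out(y, x):
    beta = np.dot(x, y) / np.dot(x, x)
    return y - beta * x

res_a = regress_out(net_a, global_signal)
res_b = regress_out(net_b, global_signal)

print(np.corrcoef(net_a, net_b)[0, 1])   # strongly positive before GSR
print(np.corrcoef(res_a, res_b)[0, 1])   # strongly negative after GSR
```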

If you like this post please let me know in the comments! If I can get away with rambling about this kind of stuff, I’ll do so more frequently.