When is expectation not a confound? On the necessity of active controls.

Learning and plasticity are hot topics in neuroscience. Whether exploring old world wisdom or new age science fiction, the possibility that playing videogames might turn us into attention superheroes or that practicing esoteric meditation techniques might heal troubled minds is an exciting avenue for research. Indeed, findings suggesting that exotic behaviors or novel therapeutic treatments might radically alter our brains (and behavior) are ripe for sensational headlines touting vast brain benefits. For those of you not totally bored of methodological crises, here we have one brewing anew. You see, the standard recommendation for those interested in intervention research is the active-controlled experimental design. Unfortunately, in both clinical research on psychotherapy (including meditation) and the more sci-fi areas of brain training and gaming, use of active controls is rare at best compared to the more convenient (but causally uninformative) passive control group. Now a new article in Perspectives on Psychological Science suggests that even standard active controls may not be sufficient to rule out confounds in the treatment effect of interest.

Why is that? And why exactly do we need active controls in the first place? As the authors clearly point out, what you want to show with such a study is the causal efficacy of the treatment of interest. Quite simply, this means that the thing you think should have some interesting effect must actually be causally responsible for creating that effect. If I want to argue that standing upside down for twenty minutes a day will make me better at playing videogames in Australia, it must be shown that it is actually standing upside down that causes my increased performance down under. If my improved performance on Minecraft Australian Edition is simply a product of my belief in the power of standing upside down, or my expectation that standing upside down is a great way to best kangaroo-creepers, then we have no way of determining what actually produced that performance benefit. Research on placebos and the power of expectations shows that these kinds of subjective beliefs can have a big impact on everything from attentional performance to mortality rates.

Useful flowchart from Boot et al on whether or not a study can make causal claims for treatment.

Typically researchers attempt to control for such confounds through the use of a control group performing a task as similar as possible to the intervention of interest. But how do we know participants in the two groups don't end up with different expectations about how they should improve as a result of the training? Boot et al point out that without actually measuring these variables, we have no way of knowing for sure that expectation biases don't produce our observed improvements. They then provide a rather clever demonstration of their concern, in an experiment where participants viewed videos of various cognitive tests as well as videos of a training task they might later receive, in this case either the first-person shooter Unreal Tournament or the spatial puzzle game Tetris. Finally, they asked the participants in each group which tests they thought they'd do better on as a result of the training. Importantly, the authors show not only that UT and Tetris led to significantly different expectations, but also that those expected benefits were specific to the modality of the trained and tested tasks. Thus participants who watched the action-intensive Unreal Tournament videos expected greater improvements on tests of reaction time and visual performance, whereas participants viewing Tetris rated themselves as likely to do better on tests of spatial memory.

This is a critically important finding for intervention research. Many researchers, myself included, have often thought of expectation and demand-characteristic confounds in a rather general way. Until recently I wouldn't have expected expectation bias to go much beyond a general "I'm doing something effective" belief. Boot et al show that our participants are a good deal cleverer than that, forming expectations-for-improvement that map onto specific dimensions of training. This means that to the degree that an experimenter's hypothesis can be discerned from either the training or the test, participants are likely to form unbalanced expectations.

The good news is that the authors provide several reasonable fixes for this dilemma. The first is simply to measure participants' expectations, specifically in relation to the measures of interest. Another useful suggestion is to run pilot studies ensuring that the two treatments do not evoke differential expectations, or similarly to check that your outcome measures are not subject to these biases. Boot and colleagues throw the proverbial glove down, daring readers to attempt experiments where the "control condition" actually elicits greater expectations yet the treatment effect is preserved. Further common concerns, such as worries about balancing false positives against false negatives, are addressed at length.
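To make the first suggestion concrete, here is a minimal sketch of what such an expectation check might look like in practice. Everything here is hypothetical (the file and all column and measure names are invented for illustration); the point is simply to compare group expectations for each outcome measure before interpreting any training effect:

```python
import pandas as pd
from scipy.stats import mannwhitneyu

# One row per participant: a group label plus a 1-10 expectation rating
# for each outcome measure (the file and all column names are hypothetical).
df = pd.read_csv("expectation_ratings.csv")

for measure in ["reaction_time", "visual_search", "spatial_memory"]:
    treatment = df.loc[df["group"] == "treatment", f"expect_{measure}"]
    control = df.loc[df["group"] == "control", f"expect_{measure}"]
    u, p = mannwhitneyu(treatment, control, alternative="two-sided")
    # A small p-value flags unbalanced expectations for this measure,
    # meaning any group difference on it is potentially confounded.
    print(f"{measure}: U = {u:.1f}, p = {p:.3f}")
```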

The entire article is a great read: timely and full of excellent suggestions for caution in future research. It also brought something I've been chewing on for some time quite clearly into focus. From the general perspective of learning and plasticity, I have to ask: at what point is an expectation no longer a confound? Boot et al give an interesting discussion on this point, suggesting that even in the case of balanced expectations and positive treatment effects, an expectation-dependent response (in which outcome correlates with expectation) may still give cause for concern about the causal efficacy of the trained task. This is a difficult question that I believe ventures far into the territory of what exactly constitutes the minimal necessary features of learning. As the authors point out, placebo and expectation effects are "real" products of the brain, with serious consequences for behavior and treatment outcome. Yet even in the medical community there is a growing understanding that such effects may be essential parts of the causal machinery of healing.

Possible outcome of a training experiment, in which the control shows no dependence between expectation and outcome (top panel) and the treatment of interest shows dependence (bottom panel). Boot et al suggest that such a case may invalidate causal claims for treatment efficacy.

To what extent might this also be true of learning or cognitive training? Certainly we can assume that expectations shape training outcomes; otherwise the whole point about active controls would be moot. But can one really have meaningful learning if there is no expectation to improve? I realize that from an experimental/clinical perspective, the question is not "is expectation important for this outcome" but "can we observe a treatment outcome when expectations are balanced". Still, when we begin to argue that the observation of expectation-dependent responses in a balanced design might invalidate our outcome findings, I have to wonder if we are at risk of valuing methodology over phenomena. If expectation is a powerful, potentially central mechanism in the causal apparatus of learning and plasticity, we shouldn't be surprised when even efficacious treatments are modulated by such beliefs. In the end I am left wondering if this is simply an inherent limitation in our attempt to apply the reductive apparatus of science to increasingly holistic domains.

Please do read the paper, as it is an excellent treatment of a critically ignored issue in the cognitive and clinical sciences. Anyone undertaking related work should expect this reference to appear in reviewers' replies in the near future.

EDIT:
Professor Simons, a co-author of the paper, was nice enough to answer my question on Twitter. Simons pointed out that a study that balanced expectations, found group outcome differences, and further found correlations of those differences with expectation could conclude that the treatment was causally efficacious, but also that it depends on expectations (effect + expectation). This would obviously be superior to an unbalanced design, or one without measurement of expectation, as it would actually tell us something about the importance of expectation in producing the causal outcome. Be sure to read through the very helpful FAQ they've posted as an addendum to the paper, which covers these questions and more in greater detail. Here is the answer to my specific question:

What if expectations are necessary for a treatment to work? Wouldn’t controlling for them eliminate the treatment effect?

No. We are not suggesting that expectations for improvement must be eliminated entirely. Rather, we are arguing for the need to equate such expectations across conditions. Expectations can still affect the treatment condition in a double-blind, placebo-controlled design. And, it is possible that some treatments will only have an effect when they interact with expectations. But, the key to that design is that the expectations are equated across the treatment and control conditions. If the treatment group outperforms the control group, and expectations are equated, then something about the treatment must have contributed to the improvement. The improvement could have resulted from the critical ingredients of the treatment alone or from some interaction between the treatment and expectations. It would be possible to isolate the treatment effect by eliminating expectations, but that is not essential in order to claim that the treatment had an effect.

In a typical psychology intervention, expectations are not equated between the treatment and control condition. If the treatment group improves more than the control group, we have no conclusive evidence that the ingredients of the treatment mattered. The improvement could have resulted from the treatment ingredients alone, from expectations alone, or from an interaction between the two. The results of any intervention that does not equate expectations across the treatment and control condition cannot provide conclusive evidence that the treatment was necessary for the improvement. It could be due to the difference in expectations alone. That is why double blind designs are ideal, and it is why psychology interventions must take steps to address the shortcomings that result from the impossibility of using a double blind design. It is possible to control for expectation differences without eliminating expectations altogether.

Can compassion be trained like a muscle? Active-controlled fMRI of compassion meditation.

Among the cognitive training literature, meditation interventions are unique in that they often emphasize emotional or affective processing at least as much as classical 'top-down' attentional control. From a clinical and societal perspective, the idea that we might be able to "train" our "emotion muscle" is an attractive one. Recently much has been made of the "empathy deficit" in the US, ranging from empirical studies suggesting a relationship between quality of care and declining caregiver empathy, to a recent push by President Obama to emphasize the deficit in numerous speeches.

While much of the training literature focuses on cognitive abilities like sustained attention and working memory, many researchers investigating meditation training have begun to study the plasticity of affective function, myself included. A recent study by Helen Weng and colleagues in Wisconsin investigated just this question, asking whether compassion ("loving-kindness") meditation can alter altruistic behavior and associated neural processing. Her study is one of the first of its kind, in that rather than merely comparing groups of advanced practitioners and controls, she utilized a fully randomized, active-controlled design to see whether compassion responds to brief training in novices, while controlling for important confounds.

As many readers should be aware, a chronic problem in training studies is the lack of properly controlled longitudinal designs. At best, many rely on "passive" or "no-contact" controls who merely complete both measurements without receiving any training. Even in the best of circumstances, "active" controls are often poorly matched to whatever is being emphasized and tested in the intervention of interest. While having both groups do "something" is better than a passive or no-control design, problems still arise if the measure of interest is mismatched to the demand characteristics of the study. Stated simply, if your condition of interest receives attention training and attention tests, while your control condition receives dieting instruction or relaxation, you can expect group differences to be confounded by an explicit "expectation to improve" in the group of interest.

In this regard, Weng et al present an almost perfect example of everything a training study should be. Both interventions were delivered via professionally made audio CDs (you can download them yourselves here!), with participants' daily practice experiences recorded online. The training materials were remarkably well matched to the tests of interest, and extra care was taken to ensure that the primary measures were not presented in a biased way. The only thing they could have done further would be a single blind (ensuring the experimenters didn't know the group identity of each participant), but given the high level of difficulty in blinding these kinds of studies, I don't blame them for not undertaking such a manipulation. In all, the study is extremely well controlled for research in this area, and I recommend it as a guideline for best practices in training research.

Specifically, Weng et al tested the impact of loving-kindness compassion meditation versus emotion reappraisal training on an emotion regulation fMRI task and a behavioral economic game measuring altruistic behavior. For the fMRI task, participants viewed emotional pictures (IAPS) depicting suffering or neutral scenarios and practiced either a compassion meditation or a reappraisal strategy to regulate their emotional response, before and after training. After the follow-up scan, good old-fashioned experimental deception was used to administer a dictator-style economic game that was ostensibly not part of the primary study and ostensibly involved real live players (both deceptions).

For those not familiar with the dictator game, the concept is essentially that a participant watches a "dictator" endowed with $100 give "unfair" offers to a "victim" who has no money. Weng et al took great care to contextualize the test purely in economic terms, limiting demand confounds:

Participants were told that they were playing the game with live players over the Internet. Effects of demand characteristics on behavior were minimized by presenting the game as a unique study, describing it in purely economic terms, never instructing participants to use the training they received, removing the physical presence of players and experimenters during game play, and enforcing real monetary consequences for participants’ behavior.

This is particularly important, as without these simple manipulations it would be easy for stodgy reviewers like myself to worry about subtle biases influencing behavior on the task. Equally important is the content of the two training programs. If, for example, Weng et al had used memory training or an attention task as their active control, it would be difficult not to worry that behavioral differences were due to one group expecting a more emotional consequence of the study, and hence acting more altruistically. In the supplementary information, Weng et al describe the two training protocols in great detail:

Compassion

… Participants practiced compassion for targets by 1) contemplating and envisioning their suffering and then 2) wishing them freedom from that suffering. They first practiced compassion for a Loved One, such as a friend or family member. They imagined a time their loved one had suffered (e.g., illness, injury, relationship problem), and were instructed to pay attention to the emotions and sensations this evoked. They practiced wishing that the suffering were relieved and repeated the phrases, “May you be free from this suffering. May you have joy and happiness.” They also envisioned a golden light that extended from their heart to the loved one, which helped to ease his/her suffering. They were also instructed to pay attention to bodily sensations, particularly around the heart. They repeated this procedure for the Self, a Stranger, and a Difficult Person. The Stranger was someone encountered in daily life but not well known (e.g., a bus driver or someone on the street), and the Difficult Person was someone with whom there was conflict (e.g., coworker, significant other). Participants envisioned hypothetical situations of suffering for the stranger and difficult person (if needed) such as having an illness or experiencing a failure. At the end of the meditation, compassion was extended towards all beings. For each new meditation session, participants could choose to use either the same or different people for each target category (e.g., for the loved one category, use sister one day and use father the next day).

Reappraisal

… Participants were asked to recall a stressful experience from the past 2 years that remained upsetting to them, such as arguing with a significant other or receiving a lower-than- expected grade. They were instructed to vividly recall details of the experience (location, images, sounds). They wrote a brief description of the event, and chose one word to best describe the feeling experienced during the event (e.g., sad, angry, anxious). They rated the intensity of the feeling during the event, and the intensity of the current feeling on a scale (0 = No feeling at all, 100 = Most intense feeling in your life). They wrote down the thoughts they had during the event in detail. Then they were asked to reappraise the event (to think about it in a different, less upsetting way) using 3 different strategies, and to write down the new thoughts. The strategies included 1) thinking about the situation from another person’s perspective (e.g., friend, parent), 2) viewing it in a way where they would respond with very little emotion, and 3) imagining how they would view the situation if a year had passed, and they were doing very well. After practicing each strategy, they rated how reasonable each interpretation was (0 = Not at all reasonable, 100 = Completely reasonable), and how badly they felt after considering this view (0 = Not bad at all, 100 = Most intense ever). Day to day, participants were allowed to practice reappraisal with the same stressful event, or choose a different event. Participants logged the amount of minutes practiced after the session.

In my view the active control is extremely well designed for the fMRI and economic tasks, with both training methods explicitly focusing on the participant altering an emotional response to other individuals. In tests of self-rated efficacy, both groups showed significant decreases in negative emotion, further confirming the active control. Interestingly, when Weng et al compared self-ratings over time, only the compassion group showed a significant reduction from the first half of training sessions to the last. I'm not sure this constitutes a limitation, as Weng et al further report that on each individual training day the reappraisal group reported significant reductions, but that the reductions themselves did not differ significantly over time. They explain this as likely due to the fact that the reappraisal group frequently changed emotional targets, whereas the compassion group had the same three targets throughout the training. Either way, the important point is that both groups self-reported similar overall reductions in negative emotion during the course of the study, strongly supporting the active control.

Now what about the findings? As mentioned above, Weng et al tested participants before and after training on an fMRI emotion regulation task. After the training, all participants performed the “dictator game”, shown below. After rank-ordering the data, they found that the compassion group showed significantly greater redistribution:

The dictator task (left) and increased redistribution (right).

For the fMRI analysis, they analyzed BOLD responses to negative vs neutral images at both time points, subtracted the beta coefficients, and entered the resulting images into a second-level design matrix testing the group difference, with the rank-ordered redistribution scores as a covariate of interest. They then tested for areas showing group differences in the correlation between redistribution scores and changes in BOLD response to negative vs neutral images (pre vs post), across the whole brain and in several ROIs, while properly correcting for multiple comparisons. Essentially this analysis asks: where in the brain do task-related changes in BOLD correlate more or less with the redistribution score in one group than in the other? For the group-by-covariate interaction they found significant differences (increased BOLD-covariate correlation) in the right inferior parietal cortex (IPC), a region of the parietal attention network, shown in the left-hand panel:

Increased correlation between negative vs neutral imagery related BOLD and redistribution scores (left), connectivity with DLPFC (right).
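For readers who want to see the shape of such an analysis, below is a minimal sketch of a second-level group-by-covariate interaction, written with nilearn rather than the SPM pipeline the authors actually used. The inputs (the per-subject post-minus-pre contrast images, group labels, and redistribution scores) are assumed to come from earlier steps, and the function name is my own invention:

```python
import numpy as np
import pandas as pd
from nilearn.glm.second_level import SecondLevelModel

def group_by_covariate_zmap(contrast_imgs, is_compassion, redistribution):
    """contrast_imgs: per-subject (negative vs neutral, post minus pre) images;
    is_compassion: boolean array marking group membership;
    redistribution: rank-ordered redistribution score, one per subject."""
    group = np.where(is_compassion, 1.0, -1.0)        # compassion vs reappraisal
    covar = redistribution - np.mean(redistribution)  # demean the covariate
    design = pd.DataFrame({
        "intercept": np.ones(len(group)),
        "group": group,
        "redistribution": covar,
        "group_x_redistribution": group * covar,      # the effect of interest
    })
    model = SecondLevelModel().fit(list(contrast_imgs), design_matrix=design)
    # Positive z-values: the BOLD change tracks redistribution more strongly
    # in the compassion group than in the reappraisal group.
    return model.compute_contrast("group_x_redistribution", output_type="z_score")
```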

They further extracted signal from the IPC cluster and entered it into a conjunction analysis, testing for areas showing significant correlation with the IPC activity, and found a strong effect in right DLPFC (right panel). Finally they performed a psychophysiological interaction (PPI) analysis with the right DLPFC as the seed, to determine regions showing significant task-modulated connectivity with that DLPFC activity. They found increased emotion-modulated DLPFC connectivity with the nucleus accumbens, a region involved in encoding positive rewards (below, right).

PPI shows increased emotion-modulated connectivity of nucleus accumbens and DLPFC.
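For the curious, the heart of a PPI is just an interaction regressor built from the seed timecourse and the psychological context, entered into a GLM alongside both main effects. The sketch below is the naive BOLD-level version with hypothetical inputs; SPM's actual implementation first deconvolves the seed to a neural-level estimate before forming the product (Gitelman et al., 2003):

```python
import numpy as np

def naive_ppi_regressor(seed_bold, psych):
    """seed_bold: extracted seed timecourse, one value per volume;
    psych: psychological vector (e.g., +1 negative image, -1 neutral).
    Enter the returned term in the GLM alongside seed and psych themselves."""
    psych_c = psych - psych.mean()                            # center the context
    seed_z = (seed_bold - seed_bold.mean()) / seed_bold.std() # z-score the seed
    return psych_c * seed_z                                   # the interaction term

# A significant beta on this regressor indicates that seed-to-region coupling
# differs by emotional context, i.e. task-modulated connectivity.
```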

Together these results tie the observed differences in altruistic behavior to training-related BOLD increases to emotional stimuli in the parietal attention network, and to increased parietal connectivity with regions implicated in cognitive control and reward processing. The authors conclude that compassion training may alter emotional processing through a novel mechanism, in which top-down central-executive circuits redirect emotional information to areas associated with positive reward, reflecting the emphasis in compassion meditation on responding to the aversive states of others with increased positive emotion. A fitting and interesting conclusion, I think.

Overall, the study should receive high marks for its excellent design and statistical rigor. There is quite a bit of interesting material in the supplementary info, a strategy I dislike, but that is no fault of the authors considering the publishing journal (Psych Science). The question itself is extremely novel in terms of previous active-controlled studies. To date only one previous active-controlled study has investigated the role of compassion meditation in empathy-related neuroplasticity. However, that study compared compassion meditation with a memory strategy course, which (in my opinion) exposes it to serious criticism regarding demand characteristics. The authors do reference that study, but only briefly, to state that both studies support a role of compassion training in altering positive emotion. Personally I would have appreciated a more thorough comparison, though I suppose I can go and do that myself if I feel so inclined :).

The study does have a few limitations worth mentioning. One thing that stood out to me was that the authors never report the results of the overall group mean contrast for negative vs neutral images. I would have liked to know whether the regions showing increased correlation with redistribution actually showed higher overall mean activation increases during emotion regulation. However, as the authors clearly had quite specific hypotheses, leading them to restrict their alpha to 0.01 (due to testing one whole-brain contrast and four ROIs), I can see why they left this out. Given the strong results of the study, it would in retrospect perhaps have been more prudent to skip the ROI analyses (which didn't seem to find much) and instead focus on the whole-brain results. I can't blame them however, as it is surprising not to see anything going on in the insula or amygdala for this kind of training. It is also a bit unclear to me why the DLPFC was used as the PPI seed rather than the primary IPC cluster, although I am somewhat unfamiliar with the conjunction-connectivity analysis used here. Finally, as the authors themselves point out, a major limitation of the study is that the redistribution measure was collected only at time two, preventing a comparison to baseline for this measure.

Given the methodological state of the topic (quite poor, generally speaking), I am willing to grant them these mostly minor caveats. Of course, without a baseline altruism measure it is difficult to make a strong conclusion about the causal impact of the meditation training on altruistic behavior, but at least their neural data are shielded from this concern. So while we can't exhaustively conclude that compassion can be trained, the results of this study certainly suggest it is possible and perhaps even likely, providing a great starting point for future research. One interesting thing for me was the difference in DLPFC. We also found task-related increases in dorsolateral prefrontal cortex following active-controlled meditation, although in the left hemisphere and for a very different kind of training and task. One other recent study, of smoking cessation, also reported alterations in DLPFC following mindfulness training, leading me to wonder if we're seeing the emergence of empirical consensus for this region's involvement in meditation training. Another interesting point is that affective regulation here seems to involve primarily top-down, attention-related neural correlates, suggesting that bottom-up processing (insula, amygdala) may be more resilient to brief training, something we also found in our study. I wonder if the group mean contrasts would have been revealing here (i.e. whether there were differences in bottom-up processing that don't correlate with redistribution). All together, a great study that raises the bar for training research in cognitive neuroscience!

Mental Training and Neuroplasticity – PhD Complete!

I was asked to write a brief summary of my PhD research for our annual CFIN report. I haven’t blogged in a while and it turned out to be a decent little blurb, so I figured I might as well share it here. Enjoy!

In the past decade, reports concerning the natural plasticity of the human brain have taken a spotlight in the media and popular imagination. In the pursuit of neural plasticity nearly every imaginable specialization, from taxi drivers to Buddhist monks, has had its day in the scanner. These studies reveal marked functional and structural neural differences between various populations of interest, and in doing so drive a wave of interest in harnessing the brain's plasticity for rehabilitation, education, and even increasing intelligence (Green and Bavelier, 2008). Under this new "mental training" research paradigm, investigators now examine what happens to brain and behavior when novices are randomized to a training condition, using longitudinal brain imaging.


These studies highlight a few promising domains for harnessing neural plasticity, particularly in the realm of visual attention, cognitive control, and emotional training. By randomizing novices to a brief ‘dose’ of action video game or meditation training, researchers can go beyond mere cross-section and make inferences regarding the causality of training on observed neural outcomes. Initial results are promising, suggesting that domains of great clinical relevance such as emotional and attentional processing are amenable to training (Lutz et al., 2008a; Lutz et al., 2008b; Bavelier et al., 2010). However, these findings are currently obscured by a host of methodological limitations.

These span from behavioral confounds (e.g. motivation and demand characteristics) to inadequate longitudinal processing of brain images, which presents particular challenges not found in within-subject or cross-sectional designs (Davidson, 2010; Jensen et al., 2011). The former can be addressed directly by careful construction of "active control" groups, in which both comparison and control groups receive putatively effective treatments, carefully designed to isolate the hypothesized "active ingredients" involved in behavioral and neuroplasticity outcomes. In this way researchers can make inferences about mechanistic specificity while excluding non-specific confounds such as social support, demand, and participant motivation.

We set out to investigate one particularly popular intervention, mindfulness meditation, while controlling for these factors. Mindfulness meditation has enjoyed a great deal of research interest in recent years. This popularity is largely due to promising findings indicating good efficacy of meditation training (MT) for emotion processing and cognitive control (Sedlmeier et al., 2012). Clinical studies indicate that MT may be particularly effective for disorders that are typically non-responsive to cognitive-behavioral therapy, such as severe depression and anxiety (Grossman et al., 2004; Hofmann et al., 2010). Understanding the neural mechanism underlying such benefits remains difficult however, as most existing investigations are cross-sectional in nature or depend upon inadequate "wait-list" passive control groups.

We addressed these difficulties in an investigation of functional and structural neural plasticity before and after a 6-week active-controlled mindfulness intervention. To control for demand, social support, teacher enthusiasm, and participant motivation, we constructed a "shared reading and listening" active control group for comparison to MT. By eliciting daily "experience samples" regarding participants' motivation to practice and minutes practiced, we ensured that groups did not differ on common motivational confounds.

We found that while both groups showed equivalent improvement on behavioral response-inhibition and meta-cognitive measures, only the MT group significantly reduced affective-Stroop conflict reaction times (Allen et al., 2012). Further, we found that MT participants showed significantly greater increases in recruitment of the dorsolateral prefrontal cortex than did controls, a region implicated in cognitive control and working memory. Interestingly, we did not find group differences in emotion-related reaction times or BOLD activity; instead we found that fronto-insular and medial-prefrontal BOLD responses in the MT group were significantly more correlated with practice than in controls. These results indicate that while brief MT is effective for training attention-related neural mechanisms, only participants with the greatest amount of practice showed altered neural responses to negative affective stimuli. This result is important because it underlines the differential response of various target skills to training, and suggests specific applications of MT depending on time and motivation constraints.

MT-related increase in DLPFC activity during the affective Stroop task.

In a second study, we utilized a longitudinally optimized pipeline to assess structural neuroplasticity in the same cohort described above (Ashburner and Ridgway, 2012). A crucial issue in longitudinal voxel-based morphometry and similar methods is the prevalence of "asymmetric" preprocessing, for example where normalization parameters are calculated from baseline images and applied to follow-up images, inflating the risk of false-positive results. We thus applied a fully symmetric deformation-based morphometric pipeline to assess training-related expansions and contractions of gray matter volume. While we found significant increases within the MT group, these differences did not survive the group-by-time comparison and thus may represent false positives; it is likely that such differences would not be ruled out by an asymmetric pipeline or a non-active-controlled design. These results suggest that brief MT may act only on functional neuroplasticity, and that greater training is required for more lasting anatomical alterations.

These projects are a promising advance in our understanding of neural plasticity and mental training, and highlight the need for careful methodology and control when investigating such phenomena. The investigation of neuroplasticity mechanisms may one day revolutionize our understanding of human learning and neurodevelopment, and we look forward to seeing a new wave of carefully controlled investigations in this area.

You can read more about the study in this blog post, where I explain it in detail. 

A happy day, my PhD defense!

References

Allen M, Dietz M, Blair KS, van Beek M, Rees G, Vestergaard-Poulsen P, Lutz A, Roepstorff A (2012) Cognitive-Affective Neural Plasticity following Active-Controlled Mindfulness Intervention. The Journal of Neuroscience 32:15601-15610.

Ashburner J, Ridgway GR (2012) Symmetric diffeomorphic modeling of longitudinal structural MRI. Frontiers in Neuroscience 6.

Bavelier D, Levi DM, Li RW, Dan Y, Hensch TK (2010) Removing brakes on adult brain plasticity: from molecular to behavioral interventions. The Journal of Neuroscience 30:14964-14971.

Davidson RJ (2010) Empirical explorations of mindfulness: conceptual and methodological conundrums. Emotion 10:8-11.

Green C, Bavelier D (2008) Exercising your brain: a review of human brain plasticity and training-induced learning. Psychology and Aging 23:692.

Grossman P, Niemann L, Schmidt S, Walach H (2004) Mindfulness-based stress reduction and health benefits: A meta-analysis. Journal of Psychosomatic Research 57:35-43.

Hofmann SG, Sawyer AT, Witt AA, Oh D (2010) The effect of mindfulness-based therapy on anxiety and depression: A meta-analytic review. Journal of consulting and clinical psychology 78:169.

Jensen CG, Vangkilde S, Frokjaer V, Hasselbalch SG (2011) Mindfulness training affects attention—or is it attentional effort?

Lutz A, Brefczynski-Lewis J, Johnstone T, Davidson RJ (2008a) Regulation of the neural circuitry of emotion by compassion meditation: effects of meditative expertise. PLoS One 3:e1897.

Lutz A, Slagter HA, Dunne JD, Davidson RJ (2008b) Attention regulation and monitoring in meditation. Trends Cogn Sci 12:163-169.

Sedlmeier P, Eberth J, Schwarz M, Zimmermann D, Haarig F, Jaeger S, Kunze S (2012) The psychological effects of meditation: A meta-analysis.

Active-controlled, brief body-scan meditation improves somatic signal discrimination.

Here in the science blog-o-sphere we often like to run to the presses whenever a laughably bad study comes along, pointing out all its incredible feats of ignorance and sloth. However, this can lead to science-sucks cynicism syndrome (a common ailment amongst graduate students), where one begins to feel like all the literature is rubbish and it just isn't worth your time to try to do something truly proper and interesting. If you are lucky, it is at this moment that a truly excellent paper will come along at just the right time to pick up your spirits and re-invigorate your work. Today I found myself at one such low point, struggling to figure out why my data suck, when just such a beauty of a paper appeared in my RSS reader.

The paper, "Brief body-scan meditation practice improves somatosensory perceptual decision making", appeared in this month's issue of Consciousness and Cognition. Laura Mirams et al set out to answer a very simple question regarding the impact of meditation training (MT) on performance of a "somatic signal detection task" (SSDT). The study is well designed: after randomization, both groups received audio CDs with 15 minutes of daily body-scan meditation or excerpts from The Lord of the Rings. In the SSDT, participants simply report when they feel a vibration stimulus on the finger, where the baseline vibration intensity is first individually calibrated to a 50% detection rate. The authors then apply a signal detection analysis framework to discern the sensitivity d' and the decision criterion c.
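As a quick refresher on the signal detection measures, here is a minimal sketch of how d' and c are computed from trial counts (the counts in the example call are invented; the log-linear correction is one standard way to avoid infinite z-scores when a rate hits 0 or 1):

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c from raw trial counts,
    with the Hautus (1995) log-linear correction applied."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion

# Fewer false alarms at a matched hit rate yields a higher d':
print(sdt_measures(30, 20, 10, 40))  # vs. print(sdt_measures(30, 20, 20, 30))
```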

Mirams et al found that, even when controlling for a host of baseline factors including trait mindfulness and baseline somatic attention, MT led to a greater increase in d', driven by significantly reduced false alarms. Although many theorists and practitioners of MT suggest a key role for interoceptive and somatic attention in related alterations of health, brain, and behavior, almost no data exist addressing this prediction, making these findings extremely interesting. The idea that MT should impact interoception and somatosensation is very sensible: in most (novice) meditation practices it is common to focus attention on bodily sensations, for example the breath entering the nostril. Further, MT involves a particular kind of open, non-judgmental awareness of bodily sensations, and in general is often described to novice students as strengthening the relationship between the mind and the sensations of the body. However, most existing studies on MT investigate traditional exteroceptive, top-down elements of attention, such as conflict resolution and the ability to maintain attentional fixation for long periods of time.

While MT certainly does involve these features, it is arguable that the interoceptive elements are more specific to the precise mechanisms of interest (they are what you actually train), whereas the attentional benefits may be more of a side effect, reflecting an early emphasis in MT on establishing attention. Thus in a traditional meditation class, you might first learn some techniques to fixate your attention, and then later learn to deploy your attention to specific bodily targets (i.e. the breath) in a particular way (non-judgmentally). The goal is not necessarily to develop a super-human ability to filter distractions, but rather to change the way in which interoceptive responses to the world (i.e. emotional reactions) are perceived and responded to. This hypothesis is well reflected in the elegant study by Mirams et al; they postulate specifically that MT will lead to greater sensitivity (d'), driven by reduced false alarms rather than an increased hit rate, reflecting a greater ability to discriminate an interoceptive signal from noise (note: see comments for clarification on this point by Steve Fleming; there is some ambiguity in interpreting the informational role of HR and FA in d'). This hypothesis not only reflects the theoretically specific contribution of MT (beyond attention training, which might be better achieved by video games, for example), but also puts forward a mechanistically specific prediction: that MT leads to a shift specifically in the quality of interoceptive signal processing, rather than in raw attentional control.

At this point you might ask: if everyone is so sure that MT involves training interoception, why is there so little data on the topic? The authors do a great job reviewing findings (even including currently in-press papers) on interoception and MT. Currently there is one major null finding using the canonical heartbeat detection task, where advanced practitioners self-reported improved heartbeat detection but in reality performed at chance. Those authors speculated that the heartbeat task might not accurately reflect the modality of interoception engaged by practitioners. In addition, a recent study investigated somatic discrimination thresholds in a cross-section of advanced practitioners and found that the ability to make meta-cognitive assessments of one's threshold sensitivity correlated with years of practice. A third recent study showed greater tactile acuity in practitioners of Tai Chi. One longitudinal study [PDF], a wait-list controlled fMRI investigation by Farb et al, found that a mindfulness-based stress reduction course altered BOLD responses during an attention-to-breath paradigm. Collectively these studies do suggest a role for MT in training interoception. However, as I have complained endlessly, cross-sections cannot tell us anything about the underlying causality of the observed effects, and longitudinal studies must be active-controlled (not wait-listed) to discern mechanisms of action. Thus active-controlled longitudinal designs are desperately needed, both to determine the causality of a treatment on some observed effect, and to rule out confounds associated with motivation, demand characteristics, and expectation. Without such a design, it is very difficult to conclude anything about the mechanisms of interest in an MT intervention.

In this regard, Mirams et al went above and beyond the call of duty as defined by the average paper. The choice of delivering the intervention via CD is excellent, as we can rule out instructor enthusiasm/ability confounds. Further, the intervention chosen is extremely simple and well described; it is just a basic body-scan meditation without additional fluff or fanfare, lending it mechanistic specificity. Both groups were even instructed to close their eyes and sit when listening, balancing these often overlooked structural factors. In this sense, Mirams et al have controlled for instruction, motivation, intervention context, and baseline trait mindfulness, and have even isolated the variable of interest: only the MT group worked with interoception, though both exerted a prolonged period of sustained attention. Armed with these controls, we can actually say that MT led to an alteration in interoceptive d', through a mechanism dependent upon the specific kind of interoceptive awareness trained in the intervention.

It is here that I have one minor nitpick of the paper. Although the use of Lord of the Rings audiotapes has precedent, and is likely a fine control for attention and motivation, one could be slightly worried that listening to tales of Elves and Orcs is not an ideal control for hours of tapes instructing you to focus on your bodily sensations, if the measure of interest involves fixating on the body. A purer active control might have been a book describing anatomy or body parts; then we could exhaustively conclude that not only is interoception driving the findings, but the particular form of interoceptive attention deployed in meditation training. As it is, a conservative reader might speculate that the observed differences reflect demand characteristics: MT participants deploy more attention to the body due to a kind of priming mechanism in the teaching. However, this is an extreme nitpick and does not detract from the fact that Mirams and co-authors have made an extremely useful contribution to the literature. In the future it would be interesting to repeat the paradigm with a more body-oriented control, and perhaps also in advanced practitioners before and after an intensive retreat, to see if the effect holds at later stages of training. Of course, given my interest in applying signal detection theory to interoceptive meta-cognition, I also cannot help but wonder what the authors might have found if they'd applied a Fleming-style meta-d' analysis to this study.

All in all, a clear study with tight methods, addressing a desperately under-developed research question, in an elegant fashion. The perfect motivation to return to my own mangled data ☺

Correcting your naughty insula: modelling respiration, pulse, and motion artifacts in fMRI

important update: Thanks to commenter "DS", I discovered that my respiration-related data were strongly contaminated due to mechanical error. The belt we used is very susceptible to becoming uncalibrated, for example if the subject moves or breathes very deeply. Looking at the raw respiration timecourses, I could see that many subjects, including the one displayed here, show a great deal of "clipping" in the timeseries. For the final analysis I will not use the respiration regressors, but rather just the pulse and motion. Thanks DS!

As I'm working my way through my latest fMRI analysis, I thought it might be fun to share a little of it here. Right now I'm coding up a batch pipeline for data from my Varela-award project, in which we compared "adept" meditation practitioners with motivation-, IQ-, age-, and gender-matched controls on a response-inhibition and error-monitoring task. One thing that came up in the project proposal meeting was a worry that, since meditation practitioners spend so much time working with the breath, they might respire differently, either at rest or during the task. As I've written about before, respiration and related physiological variables such as cardiac-pulsation-induced motion can seriously impact your fMRI results (when your heart beats, the veins in your brain pulsate, creating slight but consistent and troublesome MR artifacts). As you might expect, these artifacts tend to be worse around the main draining veins of the brain, several of which cluster around the frontoinsular and medial-prefrontal/anterior cingulate cortices. As these regions are important for response inhibition and are frequently reported in the meditation literature (without physiological controls), we wanted to control for these variables in our study.

disclaimer: I'm still learning about noise modelling, so apologies if I mess up the theory/explanation of the techniques used! I've left things a bit vague for that reason. See the bottom of the article for references and further reading. To encourage myself to post more of these "open-lab notes", I've kept the style here very informal, so apologies for typos or snafus. 😀

To measure these signals, we used the respiration belt and pulse monitor that come standard with most modern MRI machines. The belt is just a little elastic hose that you strap around the chest wall of the subject, where it records expansions and contractions of the chest to give a time series corresponding to respiration; the pulse monitor is a standard finger clip. Although I am not an expert on physiological noise modelling, I will do my best to explain the basic effects you want to model out of your data. These "non-white" noise signals include pulsation- and respiration-induced motion (when you breathe, you tend to nod your head just slightly along the z-axis), typical motion artifacts, and variability of pulsation and respiration. To do this I fed my physiological parameters into an in-house function written by Torben Lund, which incorporates a RETROICOR transformation of the pulsation and respiration timeseries. We don't just use the raw timeseries due to signal aliasing: the physio data need to be shifted so that each physiological event corresponds to a TR. The function also calculates the respiratory volume time delay (RVT), a measure developed by Rasmus Birn to model the variability in physiological parameters [1]. Variability in respiration and pulse volume (if one group of subjects tends to inhale sharply for some conditions but not others, for example) is more likely to drive BOLD artifacts than absolute respiratory volume or frequency. Finally, as is standard, I included the realignment parameters to model subject motion-related artifacts.
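For readers who haven't met RETROICOR before, the core idea is simple: assign each scan a cardiac and respiratory phase, then enter low-order sine/cosine harmonics of those phases as nuisance regressors (Glover et al., 2000; ref [3] below). Here is a minimal sketch of that phase-regressor step, assuming the physiological recording starts before the first scan and ends after the last; the actual in-house function does considerably more, including the RVT computation:

```python
import numpy as np

def retroicor_regressors(event_times, scan_times, order=2):
    """Build RETROICOR-style nuisance regressors for one physiological signal.
    event_times: times of detected events (e.g., pulse-oximeter peaks);
    scan_times: acquisition time of each volume. Returns an array with
    2 * order columns (one sine/cosine pair per harmonic)."""
    event_times = np.asarray(event_times, dtype=float)
    phases = np.empty(len(scan_times))
    for i, t in enumerate(scan_times):
        prev = event_times[event_times <= t].max()  # last event before this scan
        nxt = event_times[event_times > t].min()    # next event after this scan
        phases[i] = 2 * np.pi * (t - prev) / (nxt - prev)
    return np.column_stack(
        [f(k * phases) for k in range(1, order + 1) for f in (np.sin, np.cos)]
    )
```

Here is a shot of my monster design matrix for one subject: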

[design matrix figure]

You can see that the first 7 columns model my conditions (correct stops, unaware errors, aware errors, false alarms, and some self-report ratings), the next 20 model the RETROICOR-transformed pulse and respiration timeseries, then 41 columns for RVT, 6 for the realignment parameters, and finally my session offsets and constant. It's a big design matrix, but since we have over 1000 degrees of freedom, I'm not too worried about the extra regressors in terms of loss of power. What would be worrisome is if, for example, stop activity correlated strongly with any of the nuisance variables; we can see from the orthogonality plot that, in this subject at least, that is not the case. Now let's see if we actually have anything interesting left over after we remove all that noise:

[stop contrast SPM figure]

We can see that the stop-related activity seems pretty reasonable, clustering around the motor and premotor cortices, bilateral insula, and DLPFC, all canonical motor-inhibition regions (FWE cluster-corrected, p = 0.05). This is a good sign! Now, what about all those physiological regressors? Are they doing anything of value, or just sucking up our power? Here is the F-contrast over the pulse regressors:

[pulse F-contrast figure]

Here we can see that the peak signal is wrapped right around the pons/upper brainstem. This makes a lot of sense: the area is full of the primary vasculature that ferries blood into and out of the brain. If I were particularly interested in getting signal from the brainstem in this project, I could use a respiration-by-pulse interaction regressor to better model this [2]. Penny et al find similar results to our cardiac F-test when comparing AR(1) with higher-order AR models [6]. But since we're really only interested in higher cortical areas, the pulse regressor should be sufficient. We can also see quite a bit of variance explained around the bilateral insula and rostral anterior cingulate. Interestingly, our stop-related activity still contained plenty of significant insula response, so we can feel better that some but not all of the signal from that region is actually functionally relevant. What about respiration?

[respiration F-contrast figure]

Here we see a ton of variance explained around the occipital lobe. This makes good sense: we tend to nod our heads just slightly back and forth along the z-axis as we breathe. What we are seeing is the motion-induced artifact of that rotation, which is most severe along the back of the head and the periphery of the brain. We see a similar result for the overall motion regressors, but flipped to the front:

Ignore the above; the respiration regressor is not viable due to "clipping", see the note at the top of the post. Glad I warned everyone that this post was "in progress" 🙂 Respiration should be a bit more global, restricted to ventricles and blood vessels.

[motion F-contrast figure]

Wow, look at all the significant activity! Someone call up Nature and let them know, motion lights up the whole brain! As we would expect, the motion regressors explain a ton of uninteresting variance, particularly around the prefrontal cortex and periphery.

I still have a ways to go on this project. Obviously this is just a single subject, and the results could vary wildly. But I do think even at this point we can see that it is quite easy and desirable to model these effects in your data (note: we had some technical failure due to the respiration belt being a POS…). I should note that in SPM these sources of "non-white" noise are typically modeled using an autoregressive AR(1) model, which is enabled in the default settings (we've turned it off here). However, as there is evidence that this model performs poorly at faster TRs (which are now the norm), and that a noise-modelling approach can greatly improve SNR while removing artifacts, we are likely to get better performance out of the nuisance regression technique demonstrated here [4]. The next step will be to take these regressors to a second-level analysis, to examine whether the meditation group has significantly more BOLD variance explained by physiological noise than do controls. Afterwards, I will re-run the analysis without any physio parameters to compare the results of both.

References:


1. Birn RM, Diamond JB, Smith MA, Bandettini PA.
Separating respiratory-variation-related fluctuations from neuronal-activity-related fluctuations in fMRI.
Neuroimage. 2006 Jul 15;31(4):1536-48. Epub 2006 Apr 24.

2. Brooks J.C.W., Beckmann C.F., Miller K.L. , Wise R.G., Porro C.A., Tracey I., Jenkinson M.
Physiological noise modelling for spinal functional magnetic resonance imaging studies
NeuroImage, doi:10.1016/j.neuroimage.2007.09.018

3. Glover GH, Li TQ, Ress D.
Image-based method for retrospective correction of physiological motion effects in fMRI: RETROICOR.
Magn Reson Med. 2000 Jul;44(1):162-7.

4. Lund TE, Madsen KH, Sidaros K, Luo WL, Nichols TE.
Non-white noise in fMRI: does modelling have an impact?
Neuroimage. 2006 Jan 1;29(1):54-66.

5. Wise RG, Ide K, Poulin MJ, Tracey I.
Resting fluctuations in arterial carbon dioxide induce significant low frequency variations in BOLD signal.
Neuroimage. 2004 Apr;21(4):1652-64.

6. Penny, W., Kiebel, S., & Friston, K. (2003). Variational Bayesian inference for fMRI time series. NeuroImage, 19(3), 727–741. doi:10.1016/S1053-8119(03)00071-5

PubPeer – A universal comment and review layer for scholarly papers?

Lately I've had a plethora of discussions with colleagues concerning the possible benefits of a reddit-like "democratic review layer" that would index all scholarly papers and let authenticated users post reviews subject to karma. We've navel-gazed about various implementations, ranging from a full-out reddit clone, to a wiki, to a full-blown torrent tracker with rated comments and mass piracy. So you can imagine I was pleasantly surprised to see someone actually went ahead and put together a simple app to do exactly that.

[screenshot]

PubPeer states that its mission is to "create an online community that uses the publication of scientific results as an opening for fruitful discussion." Users create accounts using an academic email address and must have at least one first-author publication to join. Once registered, any user can leave anonymous comments on any article, which are themselves subject to up/down votes and replies.

My first action was of course to search for my own name:

[screenshot]

Hmm, no comments. Let’s fix that:

[screenshot]

Hah! Peer review is easy! Just kidding; I deleted this comment after testing to see whether it was possible. Ostensibly this is allowed so authors can reply to comments, but it does raise the concern that one can leave whatever comments one likes on one's own papers. In theory, with enough users, good comments will be quickly distinguished from bad, regardless of who makes them. In theory…

This is what an article looks like in PubPeer with a few comments:

[screenshot]

Pretty simple: any paper can be found in the database, and users then leave comments associated with those papers. On the one hand, I really like the simplicity and usability of PubPeer. I think any endeavor along these lines must follow the Twitter design mentality of doing one (and only one) thing really well. I also like the use of threaded comments and upvotes/downvotes, though I would like to see child comments also being subject to votes. I'm not sure whether I favor the anonymous approach the developers went for, but I can see costs and benefits to both public and anonymous comments, so I don't have any real suggestions there.

What I found really interesting was just to see this idea in practice. While I've discussed it endlessly, a few previously unforeseen worries leaped out right away. After browsing a few articles, it seems (somewhat unsurprisingly) that most of the comments are pretty negative and nit-picky. Considering that most early adopters of such a system are likely to be graduate students, this isn't too surprising. For one thing there is no such entity as a perfect paper, and graduate students are often fans of the kinds of boilerplate nitpicks that form the ticks and fleas of any paper. If comments add mostly doubt and negativity to papers, the whole commenting process becomes a lot of extra work for little author payoff, since no matter what, your article is going to end up looking bad.

In a traditional review, a paper's flaws and merits are assessed privately, and the final (accepted) paper is generally put forth as a polished piece of research that stands on its own merits. If a system like PubPeer became popular, being highly commented would almost certainly mean accumulating tons of nitpicky, highly negative comments. This could bias reader perceptions: highly commented PubPeer articles might receive fewer citations regardless of their actual quality.

That bit seems very counter-productive to me, and I am not sure of the solution. It might be something like light top-down comment moderation plus a sort of "reddiquette" or user code of conduct that emphasizes fair and balanced comments (no sniping). Or perhaps my "worry" isn't actually troubling at all: maybe such a system would be substantially self-policing and refreshing, shifting us from an obsession with 'perfect papers' to an understanding that no paper (or review) should be judged on anything but its own merits. Given the popularity of pun threads on reddit, I'm not convinced the wholly democratic solution will work. Whatever the result, as with most solutions to scholarly publishing, it seems clear that if PubPeer is to add substantial value to peer review, a critical mass of active users is the crucial missing ingredient.

What do you think? I’d love to hear your thoughts in the comments.

Mindfulness and neuroplasticity – summary of my recent paper.

First, let me apologize for an overlong hiatus from blogging. I submitted my PhD thesis October 1st, and it turns out that writing two papers and a thesis in the space of about three months can seriously burn out the old muse. I’ve coaxed her back through gentle offerings of chocolate, caffeine, and a bit of videogame binging. As long as I promise not to bring her within a mile of a dissertation, I believe we’re good for at least a few posts per month.

With that taken care of, I am very happy to report the successful publication of my first fMRI paper, published last month in the Journal of Neuroscience. The paper was truly a labor of love, taking nearly three years and countless hours of head-scratching work to complete. In the end I am quite happy with the finished product, and I do believe my colleagues and I managed to produce a useful result for the field of mindfulness training and neuroplasticity.

Note: this post ended up being quite long. If you are already familiar with mindfulness research, you may want to skip ahead!

Why mindfulness?

First, depending on what brought you here, you may already be wondering why mindfulness is an interesting subject, particularly for a cognitive neuroscientist. In light of the large gaps in our understanding of what neuroimaging signals actually reflect, is it really the right time to apply these complex tools to meditation? Can we really learn anything about something as potentially ambiguous as "mindfulness"? These are fair questions, and we have a long way to go, but I do believe that the study of meditation has a lot to contribute to our understanding of cognition and plasticity.

Generally speaking, when you want to investigate some cognitive phenomenon, a firm understanding of your target is essential to successful neuroimaging. Areas with years of behavioral research and concrete theoretical models make for excellent imaging subjects, as there a researcher can fall back on a sort of 'ground truth' to guide them through the neural data, which are notoriously ambiguous and difficult to interpret. Of course, well-travelled roads have their own disadvantages: they can provide a misleading sense of security, or simply be a bit dry. While mindfulness research still has a ways to go, our understanding of these practices is rapidly evolving.

At this point it helps to stop and ask: what is meditation (and by extension, mindfulness)? The first thing to clarify is that there is no single thing called "meditation"; rather, meditation is a term describing a family resemblance among highly varied practices, both spiritual and secular. Meditation or "contemplative" practices have existed for more than a thousand years and are found in nearly every spiritual tradition. More recently, here in the West our unending fascination with the esoteric has led to a popular rise in Yoga, Tai Chi, and other physically oriented contemplative practices, all of which incorporate an element of meditation.

At the simplest level of description, [mindfulness] meditation is just a process of becoming aware, whether through actual sitting meditation, exercise, or daily rituals. Meditation (as a practice) was first popularized in the West during the rise of transcendental meditation (TM). As you can see in the figure below, interest in TM led to an early boom in research articles. This boom was not to last: it gradually became clear that much of this initially promising research was the product of zealous insiders, conducted with poor controls and, in some cases, outright data fabrication. As TM became known as a cult, meditation research entered a dark age in which publishing on the topic could seriously damage a research career. Around the 1990s this trend began to reverse, as a new generation of researchers took up the study of "mindfulness" meditation.

[Figure: PubMed publications on meditation by year]
Sidenote: research everywhere is expanding. Shouldn’t we start controlling these highly popular “pubs over time” figures for total publishing volume? =)
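
The sidenote is easy to act on, by the way. Here is a minimal sketch of the normalization it suggests: divide yearly hits for a query by the total number of PubMed records that year. It assumes the public NCBI E-utilities esearch endpoint; the query term is illustrative, not the one behind the original figure.

```python
# Sketch: "pubs over time" controlled for total publishing volume.
# Assumes NCBI E-utilities; query terms are illustrative.
import time
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str) -> int:
    """Return the number of PubMed records matching a query."""
    params = {"db": "pubmed", "term": term, "rettype": "count", "retmode": "json"}
    resp = requests.get(EUTILS, params=params, timeout=30)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

for year in range(1970, 2013):
    hits = pubmed_count(f"meditation[Title/Abstract] AND {year}[PDAT]")
    total = pubmed_count(f"{year}[PDAT]")  # all records published that year
    print(year, hits, total, hits / total if total else 0.0)
    time.sleep(0.4)  # stay under NCBI's request-rate limit
```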

It's easy to see from the above why, when Jon Kabat-Zinn re-introduced meditation to the West, he relied heavily on the medical community to develop a totally secularized, intervention-oriented version of meditation strategically called "mindfulness-based stress reduction" (MBSR). The arrival of MBSR was closely related to the development of mindfulness-based cognitive therapy (MBCT), a revision of cognitive-behavioral therapy utilizing mindful practices and instruction for a variety of clinical applications. Mindfulness practice is typically described as involving at least two components: focused attention (FA) and open monitoring (OM). FA can be described as simply noticing when attention wanders from a target (the breath, the body, or a flower, for example) and gently redirecting it back to that target. OM is typically (but not always) trained at a later stage, building on the attentional skills developed in FA practice to gradually develop a sense of "non-judgmental open awareness". While a great deal of work remains to be done, initial cognitive-behavioral and clinical research on mindfulness training (MT) has shown that these practices can improve the allocation of attentional resources, reduce physiological stress, and improve emotional well-being. In the clinic, MT appears to improve symptoms in a variety of pathological syndromes, including anxiety and depression, at least as well as standard CBT or pharmacological treatments.

Has the quality of research on meditation improved since the dark days of TM? When answering this question it is important to note two things about the state of current mindfulness research. First, while it is true that many who research MT are also practitioners, the primary scholars are researchers who started in classical areas (emotion, clinical psychiatry, cognitive neuroscience) and gradually became involved in MT research. Further, most funding for MT research today comes not from shady religious institutions but from well-established bodies such as the National Institutes of Health and the European Research Council. It is of course important to be aware of the impact prior beliefs can have on impartial research, but with respect to today's meditation and mindfulness researchers, I believe that most if not all of the work being done is honest, quality research.

However, it is true that much of the early MT research is flawed on several levels. Indeed, several meta-analyses have concluded that studies of MT have often been poorly designed; in one major review, only 8 of 22 studies met criteria for meta-analysis. The reason for this is quite simple: in the absence of pilot data, investigators had to begin somewhere. It rarely makes sense to jump into unexplored territory with an expensive, large-sample, fully randomized design; there just isn't enough to go on. How would you even know which kind of process to measure? Accordingly, the large majority of mindfulness research to date has used small-scale, often sub-optimal experimental designs, sacrificing experimental control in order to build a basic map of the cognitive landscape. While this exploratory research provides a needed foundation for generating likely hypotheses, it is difficult to draw any strong conclusions so long as these methodological issues remain.

Indeed, most of what we know about mindfulness and neuroplasticity comes from studies of either advanced practitioners (compared to controls) or "wait-list" studies in which controls receive no intervention at all. On the basis of these findings we had some idea how to target our investigation, but a nagging feeling of uncertainty remained. Just how much of the literature would actually replicate? Does mindfulness alter attention merely through expectation and motivation biases (i.e. placebo-like confounds), or can MT actually drive functionally relevant attentional and emotional neuroplasticity, even when controlling for these confounds?

The name of the game is active-control

Research to date links mindfulness practice to alterations in health and physiology, cognitive control, emotional regulation, responsiveness to pain, and a large array of positive clinical outcomes. However, the explicit nature of mindfulness training raises some particularly difficult methodological issues. Cross-sectional studies, in which advanced practitioners are compared to age-matched controls, cannot provide causal evidence: it is always possible that having a big fancy brain makes you more likely to spend many years meditating, and not that meditating gives you a big fancy brain. Training studies are therefore essential to verifying the claim that mindfulness actually leads to interesting kinds of plasticity. However, unlike in a drug study or a computerized intervention, you cannot simply give the control group a sugar pill. Double-blind design is impossible; by definition, subjects know they are receiving mindfulness. To assess the impact of MT on neural activity and behavior, we need to compare groups doing relatively equivalent things in similar experimental contexts. We need an active control.

There is already a well-established link between measurement outcomes and experimental demands. What is perhaps less appreciated is that cognitive measures, particularly reaction times, are easily biased by phenomena like the Hawthorne effect, where the amount of attention participants receive directly shapes experimental outcomes. Wait-lists simply cannot overcome these difficulties. We know, for example, that simply paying controls a moderate performance-based financial reward can erase attentional reaction-time differences. If you are repeatedly told you're training attention, then come experiment time you are likely to expect this to be true and to try harder than someone who has received no such instruction. The same is true of emotional tasks: subjects frequently told they are training compassion are likely to spend more time fixating on emotional stimuli, leading to inflated self-reports and responses.

I'm sure you can quickly see how important it is to control for these factors if we are to isolate and understand the mechanisms of mindfulness training. One key solution is active control: providing both groups (MT and control) with a "treatment" that is at least nominally as efficacious as the one you are interested in. Active control allows you to exclude numerous factors from your outcome, including the role of social support, expectation, and experimental demands. This is exactly what we set out to do in our study: we recruited 60 meditation-naïve subjects, scanned them on an fMRI task, randomized them to either six weeks of MT or active control, and then measured everything again. Further, to exclude confounds relating to social interaction, we came up with a particularly unique control activity: reading Emma together.

Jane Austen as Active Control – theory of mind vs interoception

To overcome these confounds, we constructed a specialized control intervention. As it was crucial that both groups believed in their training, we needed an instructor who could match the high level of enthusiasm and experience of our meditation instructors. We were lucky to have the help of local scholar Mette Stineberg, who suggested a customized "shared reading" group to fit our purposes. Reading groups are a fun, attention-demanding exercise with purported benefits for stress and well-being. While these claims have not been explicitly tested, what mattered most was that Mette clearly believed in their efficacy, making her a perfect control instructor. Mette holds a PhD in literature, and we knew that her ten years of experience participating in and leading these groups would help us exclude instructor variables from our results.

With her help, we constructed a special condition in which participants completed group readings of Jane Austen's Emma. A sensible question to ask at this point is: why Emma? An essential element of active control is variable isolation, balancing your groups in such a way that, with the exception of your hypothesized "active ingredient", the two interventions are extremely similar. As MT is thought to depend on a particular kind of non-judgmental, interoceptive attention, Chris and Uta Frith suggested during an early meeting that Emma might be the perfect contrast: for those of you who haven't read the novel, the plot is brimming over with judgment-heavy, theory-of-mind-type exposition. Mette further sharpened the contrast with MT by emphasizing discussion sessions focused on character motives. In this way we ensured that both groups met for the same amount of time each week, with equivalently talented and passionate instructors, and felt they were working towards something worthwhile. Finally, we made sure to tell every participant at recruitment that they would receive one of two treatments intended to improve attention and well-being, and that any benefits would depend on their commitment to the practice. To help them practice at home, we created 20-minute CDs for both groups, one with a guided meditation and the other with a chapter from Emma.

Unlike previous active-controlled studies, which typically rely on relaxation training, reading groups depend upon a high level of social interaction. Reading together allowed us to exclude not only treatment context and expectation from our results, but also the more difficult effects of social support (the "making new friends" variable). To measure these factors, we built a small website where participants made daily reports of their motivation and the minutes they practiced that day. As you can see in the figure below, when we averaged these reports we found that the reading group actually practiced significantly more than the MT group, while expressing equivalent motivation to practice. Anecdotally, reading-group members expressed a high level of satisfaction with their class, with a sub-group of about eight even continuing their meetings after the study concluded. The meditation group, by comparison, did not appear to form lasting social relationships and did not continue meeting. We were very happy with these results, which make it very unlikely that our findings could be explained by unbalanced motivation or expectation.

Impact of MT on attention and emotion

After we established that the active control was successful, the first thing to look at was our outside-the-scanner behavioral results. As we were interested in the effect of meditation on both attention and meta-cognition, we used an "error-awareness task" (EAT) to examine improvement in these areas. The EAT (shown below) is a typical go/no-go task in which subjects spend most of their time pressing a button. The difficult part comes whenever a "stop trial" occurs and subjects must quickly halt their response. When subjects fail to stop, they then have the opportunity to signal the error by pressing a second button on the trial following the error. If you've ever taken this kind of task, you know it can be frustratingly difficult to stop your finger in time; the response becomes quite habitual. Using the EAT we examined the impact of MT both on controlling responses (a variable called "stop accuracy") and on meta-cognitive self-monitoring (percent "error-awareness").

The error-awareness task
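
For concreteness, here is roughly how those two measures reduce to proportions over a trial log. This is a hypothetical sketch, not the paper's analysis code; the column names (trial_type, responded, aware_press) are invented, and responses are assumed to be coded as booleans.

```python
# Toy computation of the two EAT outcome measures from a per-trial log.
import pandas as pd

trials = pd.read_csv("eat_log.csv")          # one row per trial (hypothetical file)

stops = trials[trials.trial_type == "stop"]  # trials requiring response inhibition
stop_accuracy = (~stops.responded).mean()    # proportion of stop trials correctly withheld

errors = stops[stops.responded]              # failed stops are the errors of interest
error_awareness = errors.aware_press.mean()  # proportion of errors flagged by an awareness press

print(f"stop accuracy: {stop_accuracy:.1%}, error awareness: {error_awareness:.1%}")
```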

We started by looking for significant group-by-time interactions on stop accuracy and error-awareness, which would indicate that scores changed more over time in the treatment (MT) group than in the control group. In a repeated-measures design, this kind of interaction is your first indication that the treatment had a greater effect than the control. When we looked at the data, it was immediately clear that while both groups improved over time (a 'main effect' of time), there was no interaction to be found:

Group x time analysis of SA and EA.
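
A minimal sketch of that interaction logic, under a hypothetical data layout: in a 2 (group) x 2 (time) mixed design, the group-by-time interaction on a measure is equivalent to comparing pre-to-post change scores between groups.

```python
# Interaction as a between-group test on change scores (hypothetical data).
import pandas as pd
from scipy import stats

df = pd.read_csv("eat_scores.csv")  # columns: subject, group, pre, post
df["change"] = df["post"] - df["pre"]

mt_change = df.loc[df.group == "MT", "change"]
ctrl_change = df.loc[df.group == "reading", "change"]

print(stats.ttest_ind(mt_change, ctrl_change))  # interaction: did groups improve differently?
print(stats.ttest_1samp(df["change"], 0.0))     # main effect of time: did everyone improve?
```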

While much of the improvement over time is likely explained by test-retest effects (i.e. simply taking the test twice), we wanted to see whether any of this variance might be explained by something specific to meditation. To do this, we entered stop accuracy and error-awareness into a linear model comparing, between groups, the slope relating practice to the EAT measures. Here we saw that practice predicted stop-accuracy improvement only in the meditation group, and that this relationship was statistically greater than in the reading group:

Practice vs Stop accuracy (MT only shown). We did of course test our interaction, see paper for GLM goodness =)
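
That slope comparison amounts to a practice-by-group interaction in an ordinary regression. A sketch under assumed variable names (not the paper's actual model specification):

```python
# Does the practice->improvement slope differ between groups?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("practice_eat.csv")  # columns: subject, group, practice, sa_change

fit = smf.ols("sa_change ~ practice * C(group)", data=df).fit()
print(fit.summary())  # the practice-by-group coefficient tests the slope difference
```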

These results led us to conclude that while we did not observe a treatment effect of MT on the error-awareness task, the strong time effects, together with a practice correlation found only in the MT group, suggest that the improvements may reflect the "active ingredients" of MT in the meditation group but motivation-driven artifacts in the reading group. Sadly we cannot conclude this firmly: we would have needed a third, passive control group for comparison. This was kindly pointed out by a reviewer, who noted that the argument is rather like having one's cake and eating it too. So we restrict ourselves to arguing that the EAT finding serves as a nice validation of the active control (both groups improved on something) and as a potential indicator of a stop-related treatment mechanism.

While the EAT served as a behavioral measure of basic cognitive processes, we also wanted to examine the neural correlates of attention and emotion, and how they might respond to mindfulness training in our intervention. For this we partnered with Karina Blair at the National Institute of Mental Health to bring the Affective Stroop task (shown below) to Denmark.

Affective Stroop Trial Scheme

The Affective Stroop Task (AST) builds on a basic "number-counting Stroop" to investigate the neural correlates of attention, emotion, and their interaction. The instruction is simply: count the numbers in the first display, count the numbers in the second display, and decide which display contained more numbers. As you can see in the trial example above, conflict in the task (trial type "C") is driven by incongruence between the Arabic numeral (e.g. "4") and the numerosity of the display (a display of five "4"s). In addition, each trial contains negative or neutral emotional stimuli selected from the International Affective Picture System. Using the AST, we were able to examine the neural correlates of executive attention by contrasting task (B + C > A) and emotion (negative > neutral) trials.
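
Those two contrasts are just weight vectors over condition regressors. A simplified sketch, assuming a toy design in which the three trial types and two emotion conditions are modeled as separate regressors (the actual design matrix in the paper is richer than this):

```python
# The task and emotion contrasts as zero-sum weight vectors (toy design).
import numpy as np

regressors = ["A", "B", "C", "negative", "neutral"]
task_contrast = np.array([-2, 1, 1, 0, 0])     # (B + C) > A; weights sum to zero
emotion_contrast = np.array([0, 0, 0, 1, -1])  # negative > neutral
```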

Since we were especially interested in changes over time, we expanded these contrasts to examine increases or decreases in neural response between the first and last scans of the study. To do this we relied on the two-level analysis standard in imaging: at the "first" or subject level, we examined differences between the two time points for each condition (task and emotion) within each subject. We then compared these time-related contrast images between groups using a two-sample t-test, with total minutes of practice as a covariate. To assess the impact of meditation on AST performance, we modeled reaction times with factors group, time, task, and emotion. In this way we were able to examine the impact of MT on neural activity and behavior while controlling for the kinds of artifacts discussed in the previous section.
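
For readers who want to picture the second-level step, here is a minimal sketch using nilearn's SecondLevelModel: the subject-level (time 2 minus time 1) contrast images enter a group design with a centered practice covariate. File paths, column names, and smoothing are illustrative assumptions, not the pipeline we actually used.

```python
# Sketch: two-sample group comparison with a practice covariate (nilearn).
import pandas as pd
from nilearn.glm.second_level import SecondLevelModel

info = pd.read_csv("subjects.csv")  # columns: img_path, group, practice_min (hypothetical)

design = pd.DataFrame({
    "MT": (info.group == "MT").astype(int),
    "reading": (info.group == "reading").astype(int),
    "practice": info.practice_min - info.practice_min.mean(),  # centered covariate
})

model = SecondLevelModel().fit(list(info.img_path), design_matrix=design)
z_map = model.compute_contrast([1, -1, 0], output_type="z_score")  # MT > reading
```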

Our analysis revealed three primary findings. First, the reaction-time analysis showed a significant effect of MT on Stroop conflict, the difference between reaction times on incongruent versus congruent trials. We did not, however, observe any effect on emotion-related RTs: although both groups sped up significantly on negative relative to neutral trials (a time effect), this increase was equivalent across groups. Below you can see the conflict-related RTs:

Stroop conflict result

This became particularly interesting when we examined the neural response to these conditions and again observed a pattern of overall BOLD-signal increases in the dorsolateral prefrontal cortex during task performance (below):

DLPFC increase to task

Interestingly, we did not observe significant overall increases to emotional stimuli; just being in the MT group didn't seem to be enough to change emotional processing. However, when we examined whole-brain correlations between amount of practice and increased BOLD response to negative emotion, we found a striking pattern of fronto-insular increases to negative images, similar to patterns seen in previous studies of compassion and mindfulness practice:

Greater association of prefrontal-insular response to negative emotion and practice.

When we put all this together, a pattern began to emerge. Overall, MT seemed to have a relatively clear impact on attention and cognitive control. Practice-correlated increases in EAT stop accuracy, reduced Affective Stroop conflict, and increased dorsolateral prefrontal responses to task all point towards plasticity at the level of executive function. In contrast, our emotion-related findings suggest that alterations in affective processing emerged only in the MT participants with the most practice. Given how little we know about the training trajectories of cognitive versus affective skills, we felt this was a very interesting result.

Conclusion: the more you do, the more you get?

For us, the first conclusion from all this was that when you control for motivation and a host of other confounds, brief MT appears primarily to train attention-related processes. Second, alterations in affective processing seem to require more practice to emerge. This is interesting both for understanding the neuroscience of training and for the effective application of MT in clinical settings. While a great deal of future research is needed, it is possible that the affective system is generally more resistant to intervention than attention. Altering affective processes may depend upon, and extend, increasing control over executive function. Previous research suggests that attention is highly flexible, amenable to a variety of training regimens of which MT is only one. However, we are also becoming increasingly aware that training attention alone does not seem to translate directly into even closely related benefits.

As we begin to realize that many societal and health problems cannot be solved through medication or attention-training alone, it becomes clear that techniques to increase emotional function and well-being are crucial for future development. I am reminded of a quote overheard at the Mind & Life Summer Research Institute and attributed to the Dalai Lama. Supposedly, when asked about the goal of developing meditation programs in the West, HHDL replied that what was truly needed was not "cognitive training, as (those in the West) are already too clever. What is needed rather is emotion training, to cultivate a sense of responsibility and compassion". When we consider falling rates of empathy among medical practitioners and their link to health outcomes, I think we do need to explore the role of emotional and embodied skills in supporting a wide array of functions in cognition and well-being. While emotional development likely depends upon executive function, given the recent failures to show transfer from trained domains to even closely related ones, I suspect we need to begin including affective processes in our understanding of optimal learning. If these differences hold, it may be important to reassess our interventions (mindful and otherwise), developing training programs customized in intensity, duration, and content for any given context.

Of course, rather than end on such an inspiring note, I should point out that like any study ours is not without flaws (you'll have to read the paper to find out how many 😉) and is really just an initial step. We made significant progress in replicating common neural and behavioral effects of MT while controlling for important confounds, but in retrospect the study could have been strengthened by measures better suited to distinguishing the precise mechanisms, for example measures of body awareness or empathy. Another thing that struck me was how much I wished we'd had a passive control group, which would have helped flesh out how much of our time effect was instrument reliability versus motivation. As far as I am concerned the study was a success, and I am happy to have done my part to push mindfulness research towards methodological clarity and rigor. In the future I know others will continue this trend, investigating exactly what sorts of practice are needed to alter brain and behavior, and just how these benefits are accomplished.

In the near-future, I plan to give mindfulness research a rest. Not that I don’t find it fascinating or worthwhile, but rather because during the course of my PhD I’ve become a bit obsessed with interoception and meta-cognition. At present, it looks like I’ll be spending my first post-doc applying predictive coding and dynamic causal modeling to these processes. With a little luck, I might be able to build a theoretical model that could one day provide novel targets for future intervention!

Link to paper:

Cognitive-Affective Neural Plasticity following Active-Controlled Mindfulness Intervention

Thanks to all the collaborators and colleagues who made this study possible.

Special thanks to Kate Mills (@le_feufollet) for proofing this post 🙂