Active-controlled, brief body-scan meditation improves somatic signal discrimination.

Here in the science blog-o-sphere we often like to run to the presses whenever a laughably bad study comes along, pointing out all the incredible feats of ignorance and sloth. However, this can lead to science-sucks cynicism syndrome (a common ailment amongst graduate students), where one begins to feel as if all the literature is rubbish and it just isn’t worth your time to try to do something truly proper and interesting. If you are lucky, it is at this moment that a truly excellent paper will come along at just the right time to pick up your spirits and re-invigorate your work. Today I found myself at one such low point, struggling to figure out why my data suck, when just such a beauty of a paper appeared in my RSS reader.

The paper, “Brief body-scan meditation practice improves somatosensory perceptual decision making”, appeared in this month’s issue of Consciousness and Cognition. Laura Mirams et al set out to answer a very simple question regarding the impact of meditation training (MT) on a “somatic signal detection task” (SSDT). The study is well designed; after randomization, both groups received audio CDs containing either 15 minutes of daily body-scan meditation or excerpts from The Lord of the Rings. For the SSDT, participants simply report when they feel a vibration stimulus on the finger, with the baseline vibration intensity first individually calibrated to a 50% detection rate. The authors then apply a signal-detection framework to estimate sensitivity (d’) and the decision criterion (c).
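For readers less familiar with signal detection theory, d’ and c are simple functions of the hit and false-alarm rates. A minimal sketch of the standard computation (the counts below are made up for illustration, not the paper’s data):

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and criterion (c) from raw trial counts.

    A log-linear correction (add 0.5 per cell) keeps perfect hit or
    zero false-alarm rates from producing infinite z-scores.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)             # discriminability
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias (higher = more conservative)
    return d_prime, criterion

# Fewer false alarms at the same hit rate raise d' -- the pattern Mirams et al report:
before = sdt_measures(hits=40, misses=40, false_alarms=20, correct_rejections=60)
after = sdt_measures(hits=40, misses=40, false_alarms=8, correct_rejections=72)
```

Note how in this toy example d’ rises purely because the false-alarm rate falls, with the hit rate held constant.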

Mirams et al found that, even when controlling for a host of baseline factors including trait mindfulness and baseline somatic attention, MT led to a greater increase in d’, driven by significantly reduced false alarms. Although many theorists and practitioners of MT suggest a key role for interoceptive and somatic attention in related alterations of health, brain, and behavior, there exist almost no data addressing this prediction, making these findings extremely interesting. The idea that MT should impact interoception and somatosensation is very sensible: in most (novice) meditation practices it is common to focus attention on bodily sensations, for example the breath entering the nostril. Further, MT involves a particular kind of open, non-judgmental awareness of bodily sensations, and in general is often described to novice students as strengthening the relationship between the mind and sensations of the body. However, most existing studies of MT investigate traditional exteroceptive, top-down elements of attention, such as conflict resolution and the ability to maintain attentional fixation for long periods of time.

While MT certainly does involve these features, it is arguable that the interoceptive elements are more specific to the precise mechanisms of interest (they are what you actually train), whereas the attentional benefits may be more of a side effect, reflecting an early emphasis in MT on establishing attention. Thus in a traditional meditation class, you might first learn some techniques to fixate your attention, and then later learn to deploy your attention to specific bodily targets (e.g. the breath) in a particular way (non-judgmentally). The goal is not necessarily to develop a super-human ability to filter distractions, but rather to change the way in which interoceptive responses to the world (i.e. emotional reactions) are perceived and responded to. This hypothesis is well reflected in the elegant study by Mirams et al; they postulate specifically that MT will lead to greater sensitivity (d’), driven by reduced false alarms rather than an increased hit rate, reflecting a greater ability to discriminate an interoceptive signal from noise (note: see comments for clarification on this point by Steve Fleming – there is some ambiguity in interpreting the informational role of hits and false alarms in d’). This hypothesis not only reflects the theoretically specific contribution of MT (beyond attention training, which might be better accomplished by video games, for example), but also offers a mechanistically specific prediction to test: namely, that MT leads to a shift specifically in the quality of interoceptive signal processing, rather than in raw attentional control.

At this point you might ask: if everyone is so sure that MT involves training interoception, why is there so little data on the topic? The authors do a great job reviewing findings (even including currently in-press papers) on interoception and MT. Currently there is one major null finding using the canonical heartbeat detection task, where advanced practitioners self-reported improved heartbeat detection but in reality performed at chance. Those authors speculated that the heartbeat task might not accurately reflect the modality of interoception engaged by practitioners. In addition, a recent study investigated somatic discrimination thresholds in a cross-section of advanced practitioners and found that the ability to make meta-cognitive assessments of one’s threshold sensitivity correlated with years of practice. A third recent study showed greater tactile acuity in practitioners of Tai Chi. One longitudinal study [PDF], a wait-list controlled fMRI investigation by Farb et al, found that a mindfulness-based stress reduction course altered BOLD responses during an attention-to-breath paradigm. Collectively these studies do suggest a role for MT in training interoception. However, as I have complained of endlessly, cross-sections cannot tell us anything about the underlying causality of the observed effects, and longitudinal studies must be active-controlled (not wait-listed) to discern mechanisms of action. Thus active-controlled longitudinal designs are desperately needed, both to determine the causality of a treatment on some observed effect, and to rule out confounds associated with motivation, demand characteristics, and expectation. Without such a design, it is very difficult to conclude anything about the mechanisms of interest in an MT intervention.

In this regard, Mirams went above and beyond the call of duty as defined by the average paper. The choice of delivering the intervention via CD is excellent, as we can rule out instructor enthusiasm/ability confounds. Further, the intervention chosen is extremely simple and well described; it is just a basic body-scan meditation without additional fluff or fanfare, lending mechanistic specificity. Both groups were even instructed to close their eyes and sit when listening, balancing these often-overlooked structural factors. In this sense, Mirams et al have controlled for instruction, motivation, intervention context, and baseline trait mindfulness, and have even isolated the variable of interest: only the MT group worked with interoception, though both groups exerted a prolonged period of sustained attention. Armed with these controls, we can actually say that MT led to an alteration in interoceptive d’, through a mechanism dependent upon the specific kind of interoceptive awareness trained in the intervention.

It is here that I have one minor nit-pick of the paper. Although the use of Lord of the Rings audiotapes has precedent, and is likely a great control for attention and motivation, you could be slightly worried that hearing about Elves and Orcs is not an ideal control for listening to hours of tapes instructing you to focus on your bodily sensations, if the measure of interest involves fixating on the body. A purer active control might have been a book describing anatomy or body parts; then we could conclude that not only is interoception driving the findings, but the particular form of interoceptive attention deployed in meditation training. As it is, a conservative person might speculate that the observed differences reflect demand characteristics: MT participants deploy more attention to the body due to a kind of priming mechanism in the teaching. However, this is an extreme nitpick and does not detract from the fact that Mirams and co-authors have made an extremely useful contribution to the literature. In the future it would be interesting to repeat the paradigm with a more body-oriented control, and perhaps also in advanced practitioners before and after an intensive retreat, to see if the effect holds at later stages of training. Of course, given my interest in applying signal-detection theory to interoceptive meta-cognition, I also cannot help but wonder what the authors might have found if they’d applied a Fleming-style meta-d’ analysis to this study.

All in all, a clear study with tight methods, addressing a desperately under-developed research question, in an elegant fashion. The perfect motivation to return to my own mangled data ☺

Mindfulness and neuroplasticity – summary of my recent paper.

First, let me apologize for an overlong hiatus from blogging. I submitted my PhD thesis October 1st, and it turns out that writing two papers and a thesis in the space of about three months can seriously burn out the old muse. I’ve coaxed her back through gentle offerings of chocolate, caffeine, and a bit of videogame binging. As long as I promise not to bring her within a mile of a dissertation, I believe we’re good for at least a few posts per month.

With that taken care of, I am very happy to report the successful publication of my first fMRI paper, published last month in the Journal of Neuroscience. The paper was truly a labor of love, taking nearly three years and countless hours of head-scratching work to complete. In the end I am quite happy with the finished product, and I do believe my colleagues and I managed to produce a useful result for the field of mindfulness training and neuroplasticity.

note: this post ended up being quite long. if you are already familiar with mindfulness research, you may want to skip ahead!

Why mindfulness?

First, depending on what brought you here, you may already be wondering why mindfulness is an interesting subject, particularly for a cognitive neuroscientist. In light of the large gaps in our understanding of the neurobiological foundations of neuroimaging signals, is it really the right time to apply these complex tools to meditation? Can we really learn anything about something as potentially ambiguous as “mindfulness”? Although we have a long way to go, and these are certainly fair questions, I do believe that the study of meditation has a lot to contribute to our understanding of cognition and plasticity.

Generally speaking, when you want to investigate some cognitive phenomena, a firm understanding of your target is essential to successful neuroimaging. Areas with years of behavioral research and concrete theoretical models make for excellent imaging subjects, as in these cases a researcher can hope to fall back on a sort of ‘ground truth’ to guide them through the neural data, which are notoriously ambiguous and difficult to interpret. Of course well-travelled roads also have their disadvantages, sometimes providing a misleading sense of security, or at least being a bit dry. While mindfulness research still has a ways to go, our understanding of these practices is rapidly evolving.

At this point it helps to stop and ask: what is meditation (and by extension, mindfulness)? The first thing to clarify is that there is no such thing as “meditation”; rather, meditation is really a term describing a family resemblance among highly varied practices, both spiritual and secular. Meditation or “contemplative” practices have existed for more than a thousand years and are found in nearly every spiritual tradition. More recently, here in the West our unending fascination with the esoteric has led to a popular rise in Yoga, Tai Chi, and other physically oriented contemplative practices, all of which incorporate an element of meditation.

At the simplest level of description, [mindfulness] meditation is just a process of becoming aware, whether through actual sitting meditation, exercise, or daily rituals. Meditation (as a practice) was first popularized in the West during the rise of transcendental meditation (TM). As you can see in the figure below, interest in TM led to an early boom in research articles. This boom was not to last, as it was gradually realized that much of this initially promising research was actually the product of zealous insiders, conducted with poor controls and in some cases outright data fabrication. As TM became known as a cult, meditation research underwent a dark age in which publishing on the topic could seriously damage a research career. We can also see that around the 1990s this trend started to reverse, as a new generation of researchers began investigating “mindfulness” meditation.

[Figure: PubMed publications on meditation research over time]
Sidenote: research everywhere is expanding. Shouldn’t we start controlling these highly popular “pubs over time” figures for total publishing volume? =)

It’s easy to see from the above why, when Jon Kabat-Zinn re-introduced meditation to the West, he relied heavily on the medical community to develop a totally secularized, intervention-oriented version of meditation strategically called “mindfulness-based stress reduction” (MBSR). The arrival of MBSR was closely related to the development of mindfulness-based cognitive therapy (MBCT), a revision of cognitive-behavioral therapy utilizing mindful practices and instruction for a variety of clinical applications. Mindfulness practice is typically described as involving at least two components: focused attention (FA) and open monitoring (OM). FA can be described as simply noticing when attention wanders from a target (the breath, the body, or a flower, for example) and gently redirecting it back to that target. OM is typically (but not always) trained at a later stage, building on the attentional skills developed in FA practice to gradually develop a sense of “non-judgmental open awareness”. While a great deal of work remains to be done, initial cognitive-behavioral and clinical research on mindfulness training (MT) has shown that these practices can improve the allocation of attentional resources, reduce physiological stress, and improve emotional well-being. In the clinic, MT appears to improve symptoms in a variety of conditions including anxiety and depression, at least as well as standard CBT or pharmacological treatments.

Has the quality of research on meditation improved since the dark days of TM? When answering this question it is important to note two things about the state of current mindfulness research. First, while it is true that many who research MT are also practitioners, the primary scholars are researchers who started in classical areas (emotion, clinical psychiatry, cognitive neuroscience) and gradually became involved in MT research. Further, most funding today for MT research comes not from shady religious institutions, but from well-established funding bodies such as the National Institutes of Health and the European Research Council. It is of course important to be aware of the impact prior beliefs can have on conducting impartial research, but with respect to today’s meditation and mindfulness researchers, I believe that most if not all of the work being done is honest, quality research.

However, it is true that much of the early MT research is flawed on several levels. Indeed, several meta-analyses have concluded that, generally speaking, studies of MT have often utilized poor design; in one major review only 8 of 22 studies met criteria for meta-analysis. The reason for this is quite simple: in the absence of pilot data, investigators had to begin somewhere. Typically it doesn’t bode well to jump into unexplored territory with an expensive, large-sample, fully randomized design. There just isn’t enough to go on; how would you know which kind of process to even measure? Accordingly, the large majority of mindfulness research to date has utilized small-scale, often sub-optimal experimental designs, sacrificing experimental control in order to build a basic idea of the cognitive landscape. While this exploratory research provides a needed foundation for generating likely hypotheses, it is also difficult to make any strong conclusions so long as methodological issues remain.

Indeed, most of what we know about mindfulness and neuroplasticity comes from studies of either advanced practitioners (compared to controls) or “wait-list” control studies where controls receive no intervention. On the basis of the findings from these studies, we had some idea how to target our investigation, but there remained a nagging feeling of uncertainty. Just how much of the literature would actually replicate? Does mindfulness alter attention through mere expectation and motivation biases (i.e. placebo-like confounds), or can MT actually drive functionally relevant attentional and emotional neuroplasticity, even when controlling for these confounds?

The name of the game is active-control

Research to date links mindfulness practices to alterations in health and physiology, cognitive control, emotional regulation, responsiveness to pain, and a large array of positive clinical outcomes. However, the explicit nature of mindfulness training makes for some particularly difficult methodological issues. Group cross-sectional studies, where advanced practitioners are compared to age-matched controls, cannot provide causal evidence. Indeed, it is always possible that having a big fancy brain makes you more likely to spend many years meditating, and not that meditating gives you a big fancy brain. So training studies are essential to verifying the claim that mindfulness actually leads to interesting kinds of plasticity. However, unlike with a new drug study or computerized intervention, you cannot simply provide a sugar pill to the control group. Double-blind design is impossible; by definition subjects will know they are receiving mindfulness. To actually assess the impact of MT on neural activity and behavior, we need to compare to groups doing relatively equivalent things in similar experimental contexts. We need an active control.

There is already a well-established link between measurement outcomes and experimental demands. What is perhaps less appreciated is that cognitive measures, particularly reaction time, are easily biased by phenomena like the Hawthorne effect, where the amount of attention participants receive directly contributes to the experimental outcome. Wait-lists simply cannot overcome these difficulties. We know, for example, that simply paying controls a moderate performance-based financial reward can erase attentional reaction-time differences. If you are repeatedly told you’re training attention, then come experiment time you are likely to expect this to be true and try harder than someone who has received no such instruction. The same is true of emotional tasks; subjects told frequently that they are training compassion are likely to spend more time fixating on emotional stimuli, leading to inflated self-reports and responses.

I’m sure you can quickly see how important it is to control for these factors if we are to isolate and understand the mechanisms important for mindfulness training. One key solution is active control, that is, providing both groups (MT and control) with a “treatment” that is at least nominally as efficacious as the thing you are interested in. Active control allows you to exclude numerous factors from your outcome, potentially including the role of social support, expectation, and experimental demands. This is exactly what we set out to do in our study, where we recruited 60 meditation-naïve subjects, scanned them on an fMRI task, randomized them to either six weeks of MT or active control, and then measured everything again. Further, to exclude confounds relating to social interaction, we came up with a particularly unique control activity: reading Emma together.

Jane Austen as Active Control – theory of mind vs interoception

To overcome these confounds, we constructed a specialized control intervention. As it was crucial that both groups believed in their training, we needed an instructor who could match the high level of enthusiasm and experience found in our meditation instructors. We were lucky to have the help of local scholar Mette Stineberg, who suggested a customized “shared reading” group to fit our purposes. Reading groups are a fun, attention-demanding exercise, with purported benefits for stress and well-being. While these claims have not been explicitly tested, what mattered most was that Mette clearly believed in their efficacy, making for a perfect control instructor. Mette holds a PhD in literature, and we knew that her 10 years of experience participating in and leading these groups would help us to exclude instructor variables from our results.

With her help, we constructed a special condition in which participants completed group readings of Jane Austen’s Emma. A sensible question to ask at this point is: “why Emma?” An essential element of active control is variable isolation, or balancing your groups in such a way that, with the exception of your hypothesized “active ingredient”, the two interventions are extremely similar. As MT is thought to depend on a particular kind of non-judgmental, interoceptive attention, Chris and Uta Frith suggested during an early meeting that Emma might be a perfect contrast. For those of you who haven’t read the novel, the plot is brimming over with judgment-heavy, theory-of-mind-type exposition. Mette further helped to ensure a contrast with MT by emphasizing discussion sessions focused on character motives. In this way we were able to ensure that both groups met for the same amount of time each week, with equivalently talented and passionate instructors, and felt that they were working towards something worthwhile. Finally, we made sure to let every participant know at recruitment that they would receive one of two treatments intended to improve attention and well-being, and that any benefits would depend upon their commitment to the practice. To help them practice at home, we created 20-minute CDs for both groups, one with a guided meditation and the other with a chapter from Emma.

Unlike previous active-controlled studies, which typically rely on relaxation training, reading groups depend upon a high level of social interaction. Reading together allowed us not only to exclude treatment context and expectation from our results, but also the more difficult effects of social support (the “making new friends” variable). To measure this, we built a small website for participants to make daily reports of their motivation and the minutes they practiced that day. As you can see in the figure below, when we averaged these reports we found not only that the reading group practiced significantly more than those in MT, but that they expressed equivalent levels of motivation to practice. Anecdotally, we found that reading-group members expressed a high level of satisfaction with their class, with a sub-group of about eight even continuing their meetings after our study concluded. The meditation group, by comparison, did not appear to form any lasting social relationships and did not continue meeting after the study. We were very happy with these results, which suggest that it is very unlikely our results could be explained by unbalanced motivation or expectation.

Impact of MT on attention and emotion

After we established that the active control was successful, the first thing to look at was some of our outside-the-scanner behavioral results. As we were interested in the effect of meditation on both attention and meta-cognition, we used an “error-awareness task” (EAT) to examine improvement in these areas. The EAT (shown below) is a typical go/no-go task in which subjects spend most of their time pressing a button. The difficult part comes whenever a “stop trial” occurs and subjects must quickly halt their response. In the case where the subject fails to stop, they then have the opportunity to “fix” the error by pressing a second button on the trial following the error. If you’ve ever taken this kind of task, you know that it can be frustratingly difficult to stop your finger in time – the response becomes quite habitual. Using the EAT we examined the impact of MT both on controlling responses (a variable called “stop accuracy”) and on meta-cognitive self-monitoring (percent “error-awareness”).

The error-awareness task

We started by looking for significant group-by-time interactions on stop accuracy and error-awareness, which would indicate that the change in a measure over time was statistically greater in the treatment (MT) group than in the control group. In a repeated-measures design, this type of interaction is your first indication that the treatment may have had a greater effect than the control. When we looked at the data, it was immediately clear that while both groups improved over time (a ‘main effect’ of time), there was no interaction to be found:

Group x time analysis of SA and EA.
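For a two-group pre/post design like this, the group × time interaction is equivalent to an independent-samples t-test on the pre-to-post change scores. A minimal sketch with simulated numbers (illustrative only, not the study’s data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated pre/post scores: both groups improve by ~5 points over time,
# with no extra treatment effect -- mirroring the null interaction on the EAT.
n = 30
pre_mt = rng.normal(70, 10, n)
post_mt = pre_mt + rng.normal(5, 5, n)
pre_rd = rng.normal(70, 10, n)
post_rd = pre_rd + rng.normal(5, 5, n)

# Main effect of time: paired t-test pooling both groups.
t_time, p_time = stats.ttest_rel(np.r_[pre_mt, pre_rd], np.r_[post_mt, post_rd])

# Group x time interaction: for a 2x2 design this is equivalent to an
# independent-samples t-test on the pre-to-post change scores.
t_int, p_int = stats.ttest_ind(post_mt - pre_mt, post_rd - pre_rd)
```

With these simulated parameters the time effect comes out strongly significant while the interaction does not, the same qualitative pattern described above.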

While it is likely that much of the increase over time can be explained by test-retest effects (i.e. simply taking the test twice), we wanted to see if any of this variance might be explained by something specific to meditation. To do this we entered stop accuracy and error-awareness into a linear model testing whether the slope relating each group’s practice time to the EAT measures differed between groups. Here we saw that practice predicted stop-accuracy improvement only in the meditation group, and that this relationship was statistically greater than in the reading group:

Practice vs Stop accuracy (MT only shown). We did of course test our interaction, see paper for GLM goodness =)
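This slope comparison amounts to testing a group × practice interaction term in an ordinary least-squares model. A sketch with illustrative (simulated) numbers, not the study’s data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data: minutes practiced and improvement in stop accuracy.
# Here practice predicts improvement in the MT group but not the reading group.
n = 30
practice_mt = rng.uniform(100, 600, n)
gain_mt = 0.02 * practice_mt + rng.normal(0, 2, n)
practice_rd = rng.uniform(100, 600, n)
gain_rd = rng.normal(5, 2, n)

# One model: gain ~ intercept + practice + group + practice:group.
# The interaction coefficient (beta[3]) is the difference in slopes.
practice = np.r_[practice_mt, practice_rd]
group = np.r_[np.ones(n), np.zeros(n)]  # 1 = MT, 0 = reading
X = np.column_stack([np.ones(2 * n), practice, group, practice * group])
y = np.r_[gain_mt, gain_rd]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
slope_mt = beta[1] + beta[3]  # practice slope, MT group
slope_rd = beta[1]            # practice slope, reading group
```

Fitting practice slopes within a single model, rather than separately per group, is what licenses the claim that the practice relationship is statistically greater in one group than the other.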

These results led us to conclude that while we did not observe a treatment effect of MT on the error-awareness task, the presence of strong time effects and the MT-only correlation with practice suggested that the improvements may relate to the “active ingredients” of MT in the meditation group while reflecting motivation-driven artifacts in the reading group. Sadly we cannot conclude this firmly; we would have needed to include a third, passive control group for comparison. Thankfully this was pointed out to us by a kind reviewer, who noted that this argument is a bit like having one’s cake and eating it too, so we’ll restrict ourselves to arguing that the EAT finding serves as a nice validation of the active control (both groups improved on something) and a potential indicator of a stop-related treatment mechanism.

While the EAT served as a behavioral measure of basic cognitive processes, we also wanted to examine the neural correlates of attention and emotion, to see how they might respond to mindfulness training in our intervention. For this we partnered with Karina Blair at the National Institute of Mental Health to bring the Affective Stroop task (shown below) to Denmark.

Affective Stroop Trial Scheme

The Affective Stroop Task (AST) builds on a basic “number-counting Stroop” to investigate the neural correlates of attention, emotion, and their interaction. To complete the task, your instruction is simply to count the number of numbers in the first display, count the number of numbers in the second display, and decide which display contained more numbers. As you can see in the trial example above, conflict in the task (trial-type “C”) is driven by incongruence between the Arabic numeral (e.g. “4”) and the numerosity of the display (a display of five “4”s). Meanwhile, each trial contains nasty or neutral emotional stimuli selected from the International Affective Picture System. Using the AST, we were able to examine the neural correlates of executive attention by contrasting task (B + C > A) and emotion (negative > neutral) trials.

Since we were especially interested in changes over time, we expanded on these contrasts to examine increased or decreased neural responses between the first and last scans of the study. To do this we relied on two levels of analysis (standard in imaging): at the “first” or “subject” level we examined differences between the two time points for each condition (task and emotion), within each subject. We then compared these time-related effects (contrast images) between the groups using a two-sample t-test with total minutes of practice as a covariate. To assess the impact of meditation on performing the AST, we examined reaction times in a model with factors group, time, task, and emotion. In this way we were able to examine the impact of MT on neural activity and behavior while controlling for the kinds of artifacts discussed in the previous section.

Our analysis revealed three primary findings. First, the reaction-time analysis revealed a significant effect of MT on Stroop conflict, i.e. the difference between reaction times to incongruent versus congruent trials. By contrast, we did not observe any group effect on emotion-related RTs: although both groups sped up significantly on negative trials versus neutral ones (a time effect), this speeding was equivalent in the two groups. Below you can see the Stroop-conflict-related RTs:

Stroop conflict result

This became particularly interesting when we examined the neural response to these conditions, and again observed a pattern of overall [BOLD signal] increases in the dorsolateral prefrontal cortex during task performance (below):

DLPFC increase to task

Interestingly, we did not observe significant overall increases to emotional stimuli; just being in the MT group didn’t seem to be enough to change emotional processing. However, when we examined correlations between the amount of practice and increased BOLD responses to negative emotion across the whole brain, we found a striking pattern of fronto-insular BOLD increases to negative images, similar to patterns seen in previous studies of compassion and mindfulness practice:

Greater association of prefrontal-insular response to negative emotion and practice.

When we put all this together, a pattern began to emerge. Overall, it seemed that MT had a relatively clear impact on attention and cognitive control. Practice-correlated increases in EAT stop accuracy, reduced Affective Stroop conflict, and increases in dorsolateral prefrontal cortex responses to the task all point towards plasticity at the level of executive function. In contrast, our emotion-related findings suggest that alterations in affective processing occurred only in the MT participants with the most practice. Given how little we know about the training trajectories of cognitive versus affective skills, we felt that this was a very interesting result.

Conclusion: the more you do, the more you get?

For us, the first conclusion from all this was that, when you control for motivation and a host of other confounds, brief MT appears to primarily train attention-related processes. Secondly, alterations in affective processing seemed to require more practice to emerge. This is interesting both for understanding the neuroscience of training and for the effective application of MT in clinical settings. While a great deal of future research is needed, it is possible that the affective system is generally more resistant to intervention than attention. It may be that altering affective processes depends upon, and extends, increasing control over executive function. Previous research suggests that attention is largely flexible, amenable to a variety of training regimens of which MT is only one beneficial option. However, we are also becoming increasingly aware that training attention alone does not seem to translate directly into closely related benefits.

As we begin to realize that many societal and health problems cannot be solved through medication or attention training alone, it becomes clear that techniques to increase emotional function and well-being are crucial for future development. I am reminded of a quote overheard at the Mind & Life Summer Research Institute and attributed to the Dalai Lama. Supposedly, when asked about the goal of developing meditation programs in the West, HHDL replied that what was truly needed in the West was not “cognitive training, as (those in the west) are already too clever. What is needed rather is emotion training, to cultivate a sense of responsibility and compassion”. When we consider falling rates of empathy in medical practitioners and the link to health outcomes, I think we do need to explore the role of emotional and embodied skills in supporting a wide array of functions in cognition and well-being. While emotional development is likely to depend upon executive function, given all the recent failures to show transfer from training these domains to even closely related ones, I suspect we need to begin including affective processes in our understanding of optimal learning. If these differences hold, it may be important to reassess our interventions (mindful and otherwise), developing training programs that are customized in terms of the intensity, duration, and content appropriate for any given context.

Of course, rather than end on such an inspiring note, I should point out that like any study, ours is not without flaws (you’ll have to read the paper to find out how many 😉 ) and is really just an initial step. We made significant progress in replicating common neural and behavioral effects of MT while controlling for important confounds, but in retrospect the study could have been strengthened by including measures that would better distinguish the precise mechanisms, for example a measure of body awareness or empathy. Another thing that struck me was how much I wish we’d had a passive control group, which could have helped clarify how much of our time effect reflected instrument reliability versus motivation. As far as I am concerned, the study was a success, and I am happy to have done my part to push mindfulness research towards methodological clarity and rigor. In the future I know others will continue this trend, investigating exactly what sorts of practice are needed to alter brain and behavior, and just how these benefits are accomplished.

In the near future, I plan to give mindfulness research a rest. Not that I don’t find it fascinating or worthwhile, but rather because during the course of my PhD I’ve become a bit obsessed with interoception and meta-cognition. At present, it looks like I’ll be spending my first post-doc applying predictive coding and dynamic causal modeling to these processes. With a little luck, I might be able to build a theoretical model that could one day provide novel targets for future intervention!

Link to paper:

Cognitive-Affective Neural Plasticity following Active-Controlled Mindfulness Intervention

Thanks to all the collaborators and colleagues who made this study possible.

Special thanks to Kate Mills (@le_feufollet) for proofing this post 🙂

Quick post – Dan Dennett’s Brain talk on Free Will vs Moral Responsibility

As a few people have asked me to give some impression of Dan’s talk at the FIL Brain meeting today, I’m just going to jot down my quickest impressions before I run off to the pub to celebrate finishing my dissertation today. Please excuse any typos, as what follows is unedited! Dan gave a talk very similar to his previous one several months ago at the UCL philosophy department. As always, Dan gave a lively talk with lots of funny moments and appeals to common sense. Here the focus was more on the media activities of neuroscientists, with some particularly funny finger-wagging at Patrick Haggard and Chris Frith. Some good bits were his discussion of evidence that priming subjects against free will seems to make them more likely to commit immoral acts (cheating, stealing), and a very firm statement that neuroscience is being irresponsible, complete with bombastic anti-free-will quotes by the usual suspects. Although I am a bit rusty on the mechanics of the free will debate, Dennett essentially argued for a compatibilist view of free will and determinism. The argument goes something like this: the basic idea that free will is incompatible with determinism comes from a mythology that says that in order to have free will, an agent must be wholly unpredictable. Dennett argues that this is absurd; we only need to be somewhat unpredictable. Rather than being perfectly random free agents, Dennett argues that what really matters is moral responsibility pragmatically construed. Dennett lists a “spec sheet” for constructing a morally responsible agent, including “could have done otherwise, is somewhat unpredictable, acts for reasons, is subject to punishment…”. In essence, Dan seems to be claiming that neuroscientists don’t really care about “free will”; rather, we care about the pragmatic limits within which we feel comfortable entering into legal agreements with an agent.
Thus the job of the neuroscientist is not to try to reconcile the folk and scientific views of “free will”, which isn’t interesting (on Dennett’s account) anyway, but rather to describe the conditions under which an agent can be considered morally responsible. The take-home message seemed to be that moral responsibility is essentially a political rather than a metaphysical construct. I’m afraid I can’t go into terrible detail about the supporting arguments; to be honest, Dan’s talk was extremely short on argumentation. The version he gave to the philosophy department was much heavier on technical argumentation, particularly centered around proving that compatibilism doesn’t contradict “it could have been otherwise”. In all, the talk was very pragmatic, and I do agree with the conclusions to some degree: we ought to be more concerned with the conditions and function of “will” and not argue so much about the metaphysics of “free”. Still, my inner philosopher felt that Dan was embracing some kind of basic logical contradiction and hand-waving it away with funny intuition pumps, which for me are typically unsatisfying.

For reference, here is the abstract of the talk:

Nothing—yet—in neuroscience shows we don’t have free will

Contrary to the recent chorus of neuroscientists and psychologists declaring that free will is an illusion, I’ll be arguing (not for the first time, but with some new arguments and considerations) that this familiar claim is so far from having been demonstrated by neuroscience that those who advance it are professionally negligent, especially given the substantial social consequences of their being believed by lay people. None of the Libet-inspired work has the drastic implications typically adduced, and in fact the Soon et al (2008) work, and its descendants, can be seen to demonstrate an evolved adaptation to enhance our free will, not threaten it. Neuroscientists are not asking the right questions about free will—or what we might better call moral competence—and once they start asking and answering the right questions we may discover that the standard presumption that all “normal” adults are roughly equal in moral competence and hence in accountability is in for some serious erosion. It is this discoverable difference between superficially similar human beings that may oblige us to make major revisions in our laws and customs. Do we human beings have free will? Some of us do, but we must be careful about imposing the obligations of our good fortune on our fellow citizens wholesale.

Enactive Bayesians? Response to “the brain as an enactive system” by Gallagher et al

Shaun Gallagher has a short new piece out with Hutto, Slaby, and Cole, and I felt compelled to comment on it. Shaun was my first mentor and is to thank for my understanding of what is at stake in a phenomenological cognitive science. I jumped on this piece when it came out because, as I’ve said before, enactivists often leave a lot to be desired when talking about the brain. That is to say, they more often than not leave it out entirely and focus instead on bodies, cultural practices, and other parts of our extra-neural milieu. As a neuroscientist who is enthusiastically sympathetic to the embodied, enactive approach to cognition, I find this worrisome. Which is to say that when I’ve tried to conduct “neurophenomenological” experiments, I often feel a bit left in the rain when it comes time to construct, analyze, and interpret the data.

As an “enactive” neuroscientist, I often find the de-emphasis of brains a bit troubling. For one thing, the radically phenomenological crew tends to make a lot of claims about altering the foundations of neuroscience. Things like information processing and mental representation are said to be stale, Cartesian constructs that lack ontological validity and ought to be replaced. This is fine; I’m totally open to the limitations of our current explanatory framework. However, as I’ve argued here, I believe neuroscience still has great need of these tools and that dynamical systems theory is not ready for prime-time neuroscience. We need a strong positive account of what we should replace them with, and that account needs to act as a practical and theoretical guide to discovery.

One worry I have is that enactivism quickly begins to look like a constructivist version of behaviorism, focusing exclusively on behavior to the exclusion of the brain. Of course I understand that this is a bit unfair; enactivism is about taking a dynamical, encultured, phenomenological view of the human being seriously. Yet I believe that to accomplish this we must also understand the function of the nervous system. While enactivists will often give token credit to the brain, affirming that it is indeed an ‘important part’ of the cognitive apparatus, they seem quick to value things like clothing and social status over gray matter. Call me old-fashioned, but you could strip me of job, titles, and clothing tomorrow and I’d still be capable of 80% of whatever I was before. Granted, my cognitive system would undergo a good deal of strain, but I’d still be fully capable of vision, memory, speech, and even consciousness. The same can’t be said of me if you start magnetically stimulating my brain in interesting and devious ways.

I don’t want to get derailed arguing about the explanatory locus of cognition, as I think one’s stance on the matter largely comes down to whatever one’s intuition pump says is important. We could argue about it all day; what matters more than where in the explanatory hierarchy we place the brain is how that framework lets us predict and explain neural function and behavior. This is where I think enactivism often fails: it’s all fire and bluster (and rightfully so!) when it comes to the philosophical weaknesses of empirical cognitive science, yet it mumbles and missteps when it comes to giving positive advice to scientists. I’m all for throwing out the dogma and getting phenomenological, but only if there’s something useful ready to replace the methodological bathwater.

Gallagher et al’s piece starts:

 “… we see an unresolved tension in their account. Specifically, their questions about how the brain functions during interaction continue to reflect the conservative nature of ‘normal science’ (in the Kuhnian sense), invoking classical computational models, representationalism, localization of function, etc.”

This is quite true, and it is an important tension throughout much of the empirical work done under the heading of enactivism. In my own group we’ve struggled to go from the inspiring war cries of anti-representationalism and interaction theory to the hard constraints of neuroscience. It often happens that while the story or theoretical grounding is suitably phenomenological and enactive, the methodology and its interpretation are necessarily cognitivist in nature.

Yet I think this difficulty points to the more difficult task ahead if enactivism is to succeed. Science is fundamentally about methodology, and methodology reflects and is constrained by one’s ontological/explanatory framework. We measure reaction times and neural signal lags precisely because we buy into a cognitivist framework of cognition, which essentially argues for computations that take longer to process with increasing complexity, recruiting greater neural resources. The catch is, without these things it’s not at all clear how we are to construct, analyze, and interpret our data.  As Gallagher et al correctly point out, when you set out to explain behavior with these tools (reaction times and brain scanners), you can’t really claim to be doing some kind of radical enactivism:

 “Yet, in proposing an enactive interpretation of the MNS Schilbach et al. point beyond this orthodox framework to the possibility of rethinking, not just the neural correlates of social cognition, but the very notion of neural correlate, and how the brain itself works.”

We’re all in agreement there: I want nothing more than to understand exactly how it is our cerebral organ accomplishes the impressive feats of locomotion, perception, homeostasis, and so on right up to consciousness and social cognition. Yet I’m a scientist and no matter what I write in my introduction I must measure something- and what I measure largely defines my explanatory scope. So what do Gallagher et al offer me?

 “The enactive interpretation is not simply a reinterpretation of what happens extra-neurally, out in the intersubjective world of action where we anticipate and respond to social affordances. More than this, it suggests a different way of conceiving brain function, specifically in non-representational, integrative and dynamical terms (see e.g., Hutto and Myin, in press).”

Ok, so I can’t talk about representations. Presumably we’ll call them “processes” or something like that. Whatever we call them, neurons are still doing something, and that something is important in producing behavior. Integrative- I’m not sure what that means, but I presume it means that whatever neurons do, they do it across sensory and cognitive modalities. Finally we come to dynamical- here is where it gets really tricky. Dynamical systems theory (DST) is an incredibly complex mathematical framework dealing with topology, fluid dynamics, and chaos theory. Can DST guide neuroscientific discovery?

This is a tough question. My own limited exposure to DST prevents me from making hard conclusions here. For now let’s set it aside; we’ll come back to it in a moment. First I want to get a better idea of how Gallagher et al characterize contemporary neuroscience, the source of this tension in Schilbach et al:

Functional MRI technology goes hand in hand with orthodox computational models. Standard use of fMRI provides an excellent tool to answer precisely the kinds of questions that can be asked within this approach. Yet at the limits of this science, a variety of studies challenge accepted views about anatomical and functional segregation (e.g., Shackman et al. 2011; Shuler and Bear 2006), the adequacy of short-term task-based fMRI experiments to provide an adequate conception of brain function (Gonzalez-Castillo et al. 2012), and individual differences in BOLD signal activation in subjects performing the same cognitive task (Miller et al. 2012). Such studies point to embodied phenomena (e.g., pain, emotion, hedonic aspects) that are not appropriately characterized in representational terms but are dynamically integrated with their central elaboration.

Claim one is what I’ve just argued above, that fMRI and similar tools presuppose computational cognitivism. What follows I feel is a mischaracterization of cognitive neuroscience. First we have the typical bit about functional segregation being extremely limited. It surely is and I think most neuroscientists today would agree that segregation is far from the whole story of the brain. Which is precisely why the field is undeniably and swiftly moving towards connectivity and functional integration, rather than segregation. I’d wager that for a few years now the majority of published cogneuro papers focus on connectivity rather than blobology.

Next we have a sort of critique of the use of focal cognitive tasks. This almost seems like a critique of science itself; while certainly not without limits, neuroscientists rely on such tasks in order to make controlled assessments of phenomena. There is nothing a priori that says a controlled experiment is necessarily cognitivist, any more than a controlled physics experiment must necessarily be Newtonian rather than relativistic. And again, I’d characterize contemporary neuroscience as being positively in love with “task-free” resting-state fMRI. So I’m not sure at what this criticism is aimed.

Finally there is this bit about individual differences in BOLD activation. This one I think is really a red herring; there is nothing in fMRI methodology that prevents scientists from assessing individual differences in neural function and architecture. The group I’m working with in London specializes in exactly this kind of analysis, which is essentially just creating regression models with neural and behavioral independent and dependent variables. There certainly is a lot of variability in brains, and neuroscience is working hard and making strides towards understanding those phenomena.
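In practice this kind of analysis boils down to very ordinary regression. Here is a minimal sketch in Python (with entirely invented data and numbers, mine rather than from any actual study) of regressing a behavioral score on a per-subject neural measure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a per-subject neural measure (e.g., BOLD response,
# arbitrary units) and a behavioral score (e.g., reaction time in ms)
# for 40 subjects. All numbers are invented for illustration.
n = 40
bold = rng.normal(loc=1.0, scale=0.3, size=n)
behavior = 250 + 80 * bold + rng.normal(scale=10, size=n)

# Individual-differences model: regress behavior on the neural measure.
X = np.column_stack([np.ones(n), bold])            # intercept + BOLD
beta, *_ = np.linalg.lstsq(X, behavior, rcond=None)
r = np.corrcoef(bold, behavior)[0, 1]              # brain-behavior correlation

print(f"slope: {beta[1]:.1f} ms per unit BOLD, r = {r:.2f}")
```

Individual differences here are simply the spread of subjects around the fitted line; nothing in the method forces you to average them away.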

 “Consider also recent challenges to the idea that so-called “mentalizing” areas (“cortical midline structures”) are dedicated to any one function. Are such areas activated for mindreading (Frith and Frith 2008; Vogeley et al. 2001), or folk psychological narrative (Perner et al. 2006; Saxe & Kanwisher 2003); a default mode (e.g., Raichle et al. 2001), or other functions such as autobiographical memory, navigation, and future planning (see Buckner and Carroll 2006; 2007; Spreng, Mar and Kim 2008); or self-related tasks (Northoff & Bermpohl 2004); or more general reflective problem solving (Legrand and Ruby 2010); or are they trained up for joint attention in social interaction, as Schilbach et al. suggest; or all of the above and others yet to be discovered.”

I guess this paragraph is supposed to get us thinking that these seem really different, so clearly the localizationist account of the MPFC fails. But as I’ve just said, this is for one a bit of a red herring- most neuroscientists no longer believe exclusively in a localizationist account. In fact more and more I hear top neuroscientists disparaging overly blobological accounts and referring to prefrontal cortex as a whole. Functional integration is here to stay. Further, I’m not sure I buy their argument that these functions are so disparate- it seems clear to me that they all share a social, self-related core probably related to the default mode network.

Finally, Gallagher and company set out to define what we should be explaining- behavior as “a dynamic relation between organisms, which include brains, but also their own structural features that enable specific perception-action loops involving social and physical environments, which in turn effect statistical regularities that shape the structure of the nervous system.” So we do want to explain brains, but we want to understand that their setting configures both neural structure and function. Fair enough, I think you would be hard pressed to find a neuroscientist who doesn’t agree that factors like environment and physiology shape the brain. [edit: thanks to Bryan Patton for pointing out in the comments that Gallagher’s description of behavior here is strikingly similar to accounts given by Friston’s Free Energy Principle predictive coding account of biological organisms]

Gallagher asks then, “what do brains do in the complex and dynamic mix of interactions that involve full-out moving bodies, with eyes and faces and hands and voices; bodies that are gendered and raced, and dressed to attract, or to work or play…?” I am glad to see that my former mentor and I agree at least on the question at stake, which seems to be, what exactly is it brains do? And we’re lucky in that we’re given an answer by Gallagher et al:

“The answer is that brains are part of a system, along with eyes and face and hands and voice, and so on, that enactively anticipates and responds to its environment.”

Me reading this bit: “yep, ok, brains, eyeballs, face, hands, all the good bits. Wait- what?” The answer is “… a system that … anticipates and responds to its environment.” Did Karl Friston just enter the room? Because it seems to me like Gallagher et al are advocating a predictive coding account of the brain [note: see clarifying comment by Gallagher, and my response below]! If brains anticipate their environment, then that means they are constructing a forward model of their inputs. A forward model is a Bayesian statistical model that estimates the posterior probability of a stimulus by combining sensory evidence with prior predictions about its nature. We could argue all day about what to call that model, but clearly what we’ve got here is a brain using strong internal models to make predictions about the world. What is “enactive” about these forward models, however, remains an extremely ambiguous notion.
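To make the forward-model point concrete, here is a toy sketch in Python (all probabilities invented purely for illustration) of the Bayesian step being described: a prior prediction combined with the likelihood of the sensory evidence yields a posterior, for instance when deciding whether a vibration occurred:

```python
# Toy Bayesian "forward model": a prior prediction about a stimulus is
# combined with noisy sensory evidence to yield a posterior belief.
# All numbers are invented for illustration.

prior_touch = 0.5                 # prior probability a vibration occurred
p_evidence_given_touch = 0.8      # likelihood of this sensation if it did
p_evidence_given_none = 0.3       # likelihood of this sensation if it didn't

# Bayes' rule: posterior = prior * likelihood / evidence
touch = prior_touch * p_evidence_given_touch
no_touch = (1 - prior_touch) * p_evidence_given_none
posterior = touch / (touch + no_touch)

print(f"posterior probability of touch: {posterior:.2f}")  # prints 0.73
```

A strong prior (say, expecting touch on 90% of trials) would pull the posterior upward even for weak evidence, which is one way to think about false alarms in a detection task.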

To this extent, Gallagher includes “How an agent responds will depend to some degree on the overall dynamical state of the brain and the various, specific and relevant neuronal processes that have been attuned by evolutionary pressures, but also by personal experiences” as a description of how a prediction can be enactive. But none of this is precluded by the predictive coding account of the brain. The overall dynamical state (intrinsic connectivity?) of the brain amounts to noise that must be controlled through increasing neural gain and precision. I.e., a Bayesian model presupposes that the brain is undergoing exactly these kinds of fluctuations and makes steps to produce optimal behavior in the face of such noise.

Likewise the Bayesian model is fully hierarchical- at all levels of the system the local neural function is constrained and configured by predictions and error signals from the levels above and below it. In this sense, global dynamical phenomena like neuromodulation structure prediction in ways that constrain local dynamics.  These relationships can be fully non-linear and dynamical in nature (See Friston 2009 for review). Of the other bits –  evolution and individual differences, Karl would surely say that the former leads to variation in first priors and the latter is the product of agents optimizing their behavior in a variable world.

So there you have it: enactivist cognitive neuroscience is essentially Bayesian neuroscience. If I want to fulfill Gallagher et al’s prescriptions, I need merely use resting-state, connectivity, and predictive coding analysis schemes. Yet somehow I think this isn’t quite what they meant, and therein, for me, lies the true tension in ‘enactive’ cognitive neuroscience. But maybe it is what they meant: Andy Clark recently went Bayesian, claiming that extended cognition and predictive coding are totally compatible. Maybe it’s time to put away the knives and stop arguing about representations. Yet I think an important tension remains: can we explain all the things Gallagher et al list as important using prior and posterior probabilities? I’m not totally sure, but I do know one thing: these concepts make it a hell of a lot easier to actually analyze and interpret my data.

fake edit:

I said I’d discuss DST, but ran out of space and time. My problem with DST boils down to this: it’s descriptive, not predictive. As a scientist, it is not clear to me how one actually applies DST to a given experiment. I don’t see any kind of functional ontology emerging by which to apply the myriad DST measures in a principled way. Mental chronometry may be hokey and old-fashioned, but it’s easy to understand and can be applied to data and interpreted readily. This is a huge limitation for a field as complex as neuroscience, and as rife with bad data. A leading dynamicist once told me that in his entire career “not one prediction” he’d made about (a DST measure/experiment) had come true, and that to apply DST one just needed to “collect tons of data and then apply every measure possible until one seemed interesting”. To me this is a data-fishing nightmare and does not represent a reliable guide to empirical discovery.

What are the critical assumptions of neuroscience?

In light of all the celebration surrounding the discovery of a Higgs-like particle, I found it amusing that nearly 30 years ago Higgs’ theory was rejected by CERN as ‘outlandish’. This got me wondering: just how often is scientific consensus a bar to discovery? Scientists are only human, and as such can be just as prone to blind spots, biases, and herding behaviors as other humans. Clearly the scientific method and scientific consensus (e.g. peer review) are the tools we rely on to surmount these biases. Yet every tool has its misuse, and sometimes the wisdom of the crowd is just the aggregate of all these biases.

At this point, David Zhou pointed out that when scientific consensus leads to the rejection of correct viewpoints, it’s often due to the strong implicit assumptions the dominant paradigm rests upon. Sometimes there are assumptions supporting our theories which, due to a lack of either conceptual or methodological sophistication, are not amenable to investigation. Other times we simply don’t see them; when Chomsky famously wrote his review of Skinner’s Verbal Behavior, he simply put together all the pieces of the puzzle that were floating around, and in doing so destroyed a 20-year scientific consensus.

Of course, as a cognitive scientist studying the brain, I often puzzle over what assumptions I critically depend upon to do my work. In an earlier stage of my training, I was heavily inundated with ideas from the “embodied, enactive, extended” framework, where it is common to claim that the essential bias is an uncritical belief in the representational theory of mind. While I do still see problems in mainstream information theory, I’m no longer convinced that an essentially internalist, predictive-coding account of the brain is without merit. It seems to me that the “revolution” of externalist viewpoints turned out to be more of an exercise in housekeeping, moving us beyond overly simplistic “just-so” evolutionary diatribes and empty connectionism, and introducing concepts from dynamical systems theory into information-theoretic accounts of cognition.

So, really, I’d like to open this up: what do you think are the assumptions neuroscientists cannot live without? I don’t want to shape the discussion too much, but here are a few starters off the top of my head:

  • Nativism: informational constraints are heritable and innate, learning occurs within these bounds
  • Representation: Physical information is transduced by the senses into abstract representations for cognition to manipulate
  • Frequentism: While many alternatives currently abound, for the most part I think many mainstream neuroscientists are crucially dependent on assessing differences in mean and slope. A related issue is a tendency to view variability as “noise”
  • Mental Chronometry: related to the representational theory of mind is the idea that more complex representations take longer to process and require more resources. Thus greater (BOLD/ERP/RT) equals a more complex process.
  • Evolution: for a function to exist, it must have been selected for by natural selection

That’s all off the top of my head. What do you think? Are these essential for neuroscience? What might a cognitive theory look like without them, and how could it motivate empirical research? For me, each of these is in some way quite helpful in providing a framework to interpret reaction-time, BOLD, or other cognition-related data. Have I missed any?

How to tell the difference between embodied cognition and everything else

Psychscientists have a great post up proposing an acid test for genuine embodied cognition versus the all-too-popular “x body part alters y internal process” trope. Seriously- check it out!

http://psychsciencenotes.blogspot.co.uk/2012/03/field-spotters-guide-to-embodied.html

Embodied cognition: A field spotter’s guide

Question 1: Does the paper claim to be an example of embodied cognition?

If yes, it is probably not embodied cognition. I’ve never been entirely sure why this is, but work that is actually about embodiment rarely describes itself as such. I think it’s because embodiment is the label that’s emerged to describe work from a variety of disciplines that, at the time, wasn’t about pushing any coherent agenda, and so the work often didn’t know at the time that it was embodied cognition.

This of course is less true now embodiment is such a hot topic, so what else do I look for?

Question 2:  What is the key psychological process involved in solving the task?

Embodied cognition is, remember, the radical hypothesis that we solve tasks using resources spanning our brain, bodies and environments coupled together via perception. If the research you are reading is primarily investigating a process that doesn’t extend beyond the brain (e.g. a mental number line, or a thought about the future) then it isn’t embodiment. For example, in the leaning to the left example, the suggestion was that we estimate the magnitude of things by placing them on a mental number line, and that the way we are leaning makes different parts of that number line easier to access than others (e.g. leaning left makes the smaller numbers more accessible). The key process is the mental number line, which resides solely in the brain and is hypothesised to exist to solve a problem (estimating the magnitude of things) in a manner that doesn’t require anything other than a computing brain. This study is therefore not about embodiment.

Question 3: What is the embodied bit doing?

There’s a related question that comes up, then. In papers that aren’t actually doing embodied cognition, the body and the environment only have minor, subordinate roles. Leaning to the left merely biases our access to the mental number line; thinking about the future has a minor effect on bodily sway. The important bit is still the mental stuff – the cognitive process presumably implemented solely in the brain. If the non-neural or non-cognitive elements are simply being allowed to tweak some underlying mental process, rather than play a critical role in solving the task, it’s not embodiment.

Neuroscientists: What’s the most interesting question right now?

After 20 years of cognitive neuroscience, I sometimes feel frustrated by how little progress we’ve made. We still struggle with basic issues, like how to ask a subject if he’s in pain, or what exactly our multi-million-dollar scanners measure. We lack a unifying theory linking information, psychological function, and neuroscientific measurement. We still publish all kinds of voodoo correlations, uncorrected p-values, and poorly operationalized blobfests. Yet we’ve also laid some of the most important foundational research of our time. In twenty years we’ve mapped a mind-boggling array of cognitive functions. Some of these attempts at localization may not hold; others may be built on shaky functional definitions or downright poor methods. Even in the face of this uncertainty, the sheer number and variety of functions that have been mapped is inspiring. Further, we’ve developed analytic tools to pave the way for an exciting new decade of multi-modal and connectomic research. Developments like resting-state fMRI, optogenetics, real-time fMRI, and multi-modal imaging make for a very exciting time to be a Cognitive Neuroscientist!

Online, things can seem a bit more pessimistic. Snarky methods blogs dedicated to revealing the worst in the field tend to do well, and nearly any social-media-savvy neurogeek will lament the depressing state of science journalism and the brain. While I am also tired of incessantly phrenological, blob-obsessed reports (“research finds god spot in the brain, are your children safe??”), I think we share some of the blame for not communicating properly about what interests and challenges us. For me, some of the most exciting areas of research are those concerned with getting straight about what our measurements mean- see the debates over noise in resting state or the neural underpinnings of the BOLD signal, for example. Yet these issues are often reported as dry methodological reports, the writers themselves seemingly bored with the topic.

We need to do a better job illustrating to people just how complex our field is, and how much it is still in its infancy. The big, sexy issues are methodological in nature. They’re also phenomenological in nature. Right now neuroscience is struggling to define itself, unsure whether we should be asking our subjects how they feel or anesthetizing them. I believe that if we can illustrate just how tenuous much of our research is, including the really nagging problems, the public will better appreciate seemingly nuanced issues like rest-stimulus interaction and noise regression.

With that in mind- what are your most exciting questions, right now? What nagging thorn ails you at all steps in your research?

For me, the most interesting and nagging question is: what do people do when we ask them to do nothing? I’m talking about rest-stimulus interaction and mind wandering. There seem to be two prevailing (pro-resting-state) views: that default-mode-network activity reflects subjective mind-wandering, and/or that it is a form of global, integrative, stimulus-independent neural variability. On the first view, variability in participants’ ability to remain on-task drives slow alterations in behavior and stimulus-evoked brain activity. On the second, innate and spontaneous rhythms synchronize large brain networks in ways that alter stimulus processing and enable memory formation. Either way, we’re left with the idea that a large portion of our supposedly well-controlled, stimulus-related brain activity is in fact predicted by uncontrolled intrinsic brain activity. Perhaps even defined by it! And when you consider that all of this is contingent on the intrinsic activity being real brain activity, and not some kind of vascular or astrocyte-driven artifact, every research paradigm becomes a question of rest-stimulus interaction!

So neuroscientists, what keeps you up at night?

Surely, God loves the .06 (blob) nearly as much as the .05.

Image Credit: Dan Goldstein

“We are not interested in the logic itself, nor will we argue for replacing the .05 alpha with another level of alpha, but at this point in our discussion we only wish to emphasize that dichotomous significance testing has no ontological basis. That is, we want to underscore that, surely, God loves the .06 nearly as much as the .05. Can there be any doubt that God views the strength of evidence for or against the null as a fairly continuous function of the magnitude of p?”

Rosnow, R.L. & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist, 44, 1276-1284.

This colorful quote came to mind while discussing significance testing procedures with colleagues over lunch. In cognitive neuroscience, with its enormous volumes of opaque data, it seems we are so often met with these kinds of seemingly absurd yet important statistical decisions. Should one correct p-values over a lifetime, as our resident methodology expert often suggests? I love this suggestion; imagine an academia where the fossilized experts (no offense, experts) are tossed aside for the newest and greenest researchers, whose pool of p-values remains untapped!
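To put a number on Rosnow and Rosenthal’s point, here is a minimal sketch (invented effect sizes, a simple two-sided one-sample z-test rather than anything from a real study) of how two nearly identical results can land on opposite sides of the .05 line:

```python
import math

def two_sided_p(effect, sd, n):
    """Two-sided p-value for a one-sample z-test of mean = 0."""
    z = effect / (sd / math.sqrt(n))
    # Normal CDF via the error function.
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

# Two hypothetical studies, n = 30 each, with barely different effect sizes.
p_a = two_sided_p(0.36, 1.0, 30)   # lands just under .05
p_b = two_sided_p(0.35, 1.0, 30)   # lands just over .05
print(f"p_a = {p_a:.3f}, p_b = {p_b:.3f}")
```

The two p-values differ by well under .01, yet dichotomous testing declares one a finding and the other a null result; the strength of evidence, as the quote insists, is a continuous function of p.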

Really though, just how many a priori anatomical hypotheses should one have sealed up in envelopes? As one colleague joked, it seems advantageous to keep a drawer full of wild speculations sealed away in case one’s whole-brain analysis fails to yield results. Of course we must observe and follow best scientific and statistical practice, but in truth researchers often find themselves at these obscure impasses, thousands of dollars of scanning funding spent, trying to decide whether or not they predicted a given region’s involvement. In these circumstances, it has even been argued that there is a certain ethical need to explore one’s data rather than merely throw away all findings that don’t fit the hypothesis. While I do not support this claim, I believe it is worth considering. Further, I believe that a vast majority of the field, from the top institutions to the most obscure, often dips into these murky ethical waters.

This is one area where I hope “data-driven” science, as in the Human Genome and Human Connectome projects, can succeed. It also points to a desperate need for publishing reform; surely what matters is not how many blobs fall on one side of an arbitrary threshold, but rather a full and accurate depiction of one’s data and its implications. In a perfect world, we would not need to obscure the truth hidden in these massive datasets while we hunt for sufficiently low p-values.

Rather, we should publish a clear record showing exactly what was done, what correlated with what, and where significance and non-significance lie. Perhaps we might one day dream of combing through such datasets and actually explaining what drove the .06s versus the .05s. For now, however, we must be careful not to peek at our uncorrected statistical maps; that way voodoo surely lies! And that is perhaps the greatest puzzle of all: two researchers, identical data. In one case the researcher writes down on paper, “blobs A, B, and C I shall see,” and then runs significance tests on those regions of interest. In the other, he first examines the uncorrected map, notices blobs A, B, and C, and then conducts the same region-of-interest analysis. In both cases the results and data are the same, and yet one is classic statistical voodoo (double dipping) while the other is perfectly valid hypothesis testing. It seems our truth criterion lies not only with our statistics but also, in some way, in the epistemological ether.
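The inflation the voodoo label guards against is easy to demonstrate with a toy simulation (pure noise data with invented dimensions; nothing here models a real experiment). Selecting the strongest “voxels” and then estimating their effect from the same data yields a spuriously large effect; selecting on one half of the data and estimating on the other does not:

```python
import random
import statistics

random.seed(1)

n_subjects, n_voxels = 30, 1000
# Pure noise: there is no true effect in any "voxel".
data = [[random.gauss(0.0, 1.0) for _ in range(n_voxels)]
        for _ in range(n_subjects)]

def voxel_means(rows):
    """Mean across subjects (rows) for each voxel."""
    n = len(rows)
    return [sum(row[v] for row in rows) / n for v in range(n_voxels)]

# Double dipping: select the 10 strongest voxels from the full data,
# then report their effect from the SAME data.
full = voxel_means(data)
top = sorted(range(n_voxels), key=full.__getitem__)[-10:]
double_dipped = statistics.mean(full[v] for v in top)

# Independent split: select voxels on one half, estimate on the other.
sel = voxel_means(data[:15])
top_indep = sorted(range(n_voxels), key=sel.__getitem__)[-10:]
held_out = voxel_means(data[15:])
independent = statistics.mean(held_out[v] for v in top_indep)

print(f"double-dipped effect: {double_dipped:.2f}")  # biased well above zero
print(f"held-out effect:      {independent:.2f}")
```

The writing-the-prediction-down-first case corresponds to the held-out estimate: the selection step carries no information about the noise being tested, so the estimate stays honest even though the arithmetic is identical.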

Of course, it’s really more of a pragmatic distinction than an ontological one. The voodoo distinction serves not to delineate true from false results but to discourage researchers from engaging in risky practices that inflate the rate of false positives. All in all, I agree with Dorothy Bishop: we need to stop chasing the novel and typically spurious, and begin to share and investigate our data in ways that create lasting, informative truths. The brain is simply too complex and expensive an object of study to let these practices build into an inevitable file drawer of doom. It infuriates me how frustratingly obtuse many published studies are, even in top journals, about the precise methods and analyses that went into the paper. Wouldn’t we all rather share our data and help explain it cohesively? I dread the coming collision between the undoubtedly monolithic iceberg of unpublished negative findings and spurious positive findings, and our most trusted brain-mapping paradigms.

The 2011 Mind & Life Summer Research Institute: Are Monks Better at Introspection?

As I’m sitting in the JFK airport waiting for my flight to Iceland, I can’t help but let my mind wander over the curious events of this year’s summer research institute (SRI). The Mind & Life Institute, an organization dedicated to the integration and development of what they’ve dubbed “contemplative science”, holds the SRI each summer to bring together clinicians, neuroscientists, scholars, and contemplatives (mostly monks) in a format that is half conference and half meditation retreat. The summer research institute is always a ton of fun, and a great place to further one’s understanding of Buddhism & meditation while sharing valuable research insights.

I was lucky enough to receive a Varela award for my work in meta-cognition and mental training and so this was my second year attending. I chose to take a slightly different approach from my first visit, when I basically followed the program and did whatever the M&L thought was the best use of my time. This meant lots of meditation- more than two hours per day not including the whole-day, silent “mini-retreat”. While I still practiced this year, I felt less obliged to do the full program, and I’m glad I took this route as it provided me a somewhat more detached, almost outsider view of the spectacle that is the Mind & Life SRI.

When I say spectacle, it’s important to understand how unconventional a conference setting the SRI really is. Each year dozens of ambitious neuroscientists and clinicians meet with dozens of Buddhist monks and western “mindfulness” instructors. The initial feeling is one of severe culture clash: ambitious young scholars who can hardly wait to mention their Ivy League affiliations meet the austere, almost ascetic approach of traditional Buddhist philosophy. In some cases it almost feels like a race to “out-mindful” one another, as folks put on a great show of piety in order to fit the mood of the event. It can be a bit jarring to oscillate between the supposed tranquility and selflessness of mindfulness and the unabashed ambition of these highly talented and motivated folk, at least until things settle down a bit.

Nonetheless, the overall atmosphere of the SRI is one of serenity and scholarship. It’s an extremely fun, stimulating event, rich with amazingly talented yoga and meditation instructors and attended by the top researchers in the field. What follows is thus not meant as any kind of attack on the overall mission of the M&L. Indeed, I’m more than grateful to the institute for carrying on at least some form of Francisco Varela’s vision for cognitive science, and of course for supporting my own meditation research. With that said, we turn to the question at hand: are monks objectively better at introspection? The answer for nearly everyone at the SRI appears to be “yes”, despite the scarcity of data suggesting this is the case.

Enactivism and Francisco Varela

Before I can really get into this issue, I need to briefly review what exactly “enactivism” is and how it relates to the SRI. The Mind & Life Institute was co-founded by Francisco Varela, a Chilean biologist and neuroscientist who is often credited with the birth and success of embodied and enactive cognitive science. Varela had a profound impact on scientists, philosophers, and cognitive scientists, and is a central influence on my own theoretical work. His essential thesis was outlined in “The Embodied Mind”, in which Varela, Thompson, and Rosch attempted to lay out a new paradigm for the study of mind. In the book, Varela et al rely on examples from cross-cultural psychology, continental phenomenology, Buddhism, and cognitive science to argue that cognition (and mind) is essentially an embodied, enactive phenomenon. The book has since spawned a generation of researchers dedicated in some way to the idea that cognition is not essentially, or at least not foundationally, computational and representational in form.

I don’t intend here to get into the roots of what enactivism is; for now it suffices to say that enactivism as envisioned by Varela involved a dedication to the “middle way”, in which the idealism-objectivism duality is collapsed in favor of a dynamical, non-representational account of cognition and the world. I very much favor this view and try to use it productively in my own research. Varela argued throughout his life that cognition is not essentially an internal, information-processing phenomenon, but rather an emergent and intricately interwoven entity arising from our history of structural coupling with the world. He further argued that cognitive science needed to develop a first-person methodology if it was to fully explain the rich panorama of human consciousness.

A simpler way to put this is that Varela argued persuasively that minds are not computers “parachuted into an objective world”, and that cognition is not about sucking up impoverished information for representation in a subjective format. While Varela invoked much of Buddhist ontology, including concepts of “emptiness” and “inter-relatedness”, to develop his account, continental phenomenologists like Heidegger and Merleau-Ponty also heavily inspired his vision of 4th-wave cognitive science. At the SRI there is little mention of this; most scholars are unaware of the continental literature, or that phenomenology is not the same thing as introspection. Indeed I had to cringe when one to-be-unnamed young scientist declared a particular spinal pathway to be “the central pathway for embodiment”.

This is a stark misunderstanding of what embodiment means, and one that I would argue renders it a relatively trivial add-on to the information-processing paradigm, something most enactivists would strongly resist. I politely pointed the gentleman to the work of Ulric Neisser, who argued for an ecological, embodied self in which the structure of the face is said to pre-structure human experience in particular ways; i.e., we typically experience ourselves as moving through the world toward a central, fovea-centered point. Embodiment is an external, or pre-noetic, structuring of the mind; it follows no nervous pathway but rather structures the possibilities of the nervous system and mind. I hope he follows that reference down the rabbit hole of the full possibilities of embodiment, the least of which is body-as-extra-module.

Still, I certainly couldn’t blame this particular scientist for his misunderstanding; nearly everyone at the SRI is totally unfamiliar with externalist/phenomenological perspectives, which is a sad state of affairs for a generation of scientists being supported by grants in Varela’s name. Regardless of Varela’s vision for cognitive science, his thesis regarding introspectionism is certainly running strong: first-person methodologies are the hot topic of the SRI, and nearly everyone agreed that by studying contemplative practitioners’ subjective reports we’d gain some special insight into the mind. Bracketing whether introspection is what Varela really meant by neurophenomenology (I don’t think it is; phenomenology is not introspection), we are brought to the central question: are Buddhist practitioners expert introspectionists?

Expertise and Introspectionism

Varela certainly believed this to some degree. It’s not entirely clear to me that the bulk of Varela’s work amounts to this maxim, but it is certainly true that in papers such as his seminal “Neurophenomenology: a methodological remedy for the hard problem” he argued that a careful first-person methodology could reap great benefits in this arena. Varela later followed up this theoretical thesis with his now well-known experiment conducted with then-PhD student, and my current mentor, Antoine Lutz.

While I won’t reproduce the details of this experiment at length here, Lutz and Varela demonstrated that it was in fact possible to inform and constrain electrophysiological measurements through the collection and systematization of first-person reports. It’s worth noting that the participants in this experiment were everyday folks, not meditation practitioners, and that Lutz & Varela developed a special method to integrate the reports rather than simply taking them at face value. In fact, while Varela did often suggest that we might, through careful contemplation and collaboration with the Buddhist tradition, refine first-person methodologies and gain insight into the “hard problem”, he never completed these experiments with practitioners, a fact that can likely be attributed to his premature death from aggressive hepatitis.

Regardless of Varela’s own work, it’s fascinating to me that at today’s SRI, if there is one thing nearly everyone seems to explicitly agree on, it’s that meditation practitioners have some kind of privileged access to experience. I can’t count how many discussions seemed to simply assume the truth of this, despite the fact that almost no empirical research has demonstrated any kind of increased meta-cognitive capacity or accuracy in adept contemplatives.

While Antoine and I are in fact running experiments dedicated to answering this question, the fact remains that this research is largely exploratory and without strong empirical leads to work from. While I do believe that some level of meditation practice can provide greater reliability and accuracy in meta-cognitive reports, I don’t see any reason to value the reports of contemplative practitioners above and beyond those of any other particular niche group. If I want to know what it’s like to experience baseball, I’m probably going to ask some professional baseball players and not a Buddhist monk. At several points during the SRI I tried to express just this sentiment; that studying Buddhist monks gives us a greater insight into what-it-is-like to be a monk and not much else. I’m not sure if I succeeded, but I’d like to think I planted a few seeds of doubt.

There are several reasons for this. First, I part with Varela where he assumes that the Buddhist tradition and/or “Buddhist psychology” has particularly valuable insights (for example, emptiness) that can’t be gleaned from western approaches. It might, but I don’t buy into the idea that the Buddhist tradition is its own kind of scientific approach to the mind; it’s not; it’s a religion. For me the middle way means a lifelong commitment to a kind of metaphysical agnosticism, and I refuse to believe that any human tradition has a vast advantage over another. This was never more apparent than during a particularly controversial talk by John Dunne, a Harvard contemplative scholar, whose keynote was dedicated to getting scientists like myself to go beyond the traditional texts and the veridical reports of practitioners, and instead engage in what he called “trialogue” in order to discover “what it is practitioners are really doing”. At the end of his talk, one of the Dalai Lama’s lead monks actually took great offense, scolding John for “misleading the youth with his western academic approach”. The entire debacle was a perfect case-in-point demonstration of John’s talk: one cannot simply take the word of highly religious practitioners as some kind of veridical statement about the world.

This isn’t to say that we can’t learn a great deal about experience, and the mind, through meditation and careful introspection. I think at an early level it’s enough to just sit with one’s breath and suspend beliefs about what exactly experience is. I do believe that in our modern lives we spend precious little time with the body and the mind, simply observing what arises in an impartial way. I agree with Sogyal Rinpoche that we are at times overly disembodied and away from ourselves. Yet this practice isn’t unique to Buddhism; the phenomenological reduction comes from Husserl and is a practice developed throughout continental phenomenology. I do think that Buddhism has developed some particularly interesting techniques to aid this process, such as Vipassana and compassion meditation, that can and will yield insights for the cognitive scientist interested in experience, and I hope that my own work will demonstrate as much.

But this is a very different claim from the one that says monastic Buddhists have particularly special access to experience. At the end of the day I’m going to hedge my bets with the critical, empirical, and dialectical approach of cognitive science. In fact, I think there may be good reasons to suspect that high-level practitioners are inappropriate participants for “neurophenomenology”. Take, for example, the excellent and controversial talk given this year by Willoughby Britton, in which she described how contemplative science has been too quick to sweep under the rug a vast array of negative “side effects” of advanced practice, including hallucination, depersonalization, pain, and extreme terror. This makes a good deal of sense: advanced meditation practice is less impartial phenomenology and more a rigorous, ritualized mental practice embedded in a strong religious context. I believe that across cultures many religions share such techniques, often utilizing rhythmic breathing, body postures, and intense belief priming to engender an almost psychedelic state in the practitioner.

What does this mean for cognitive science and enactivism? First, it means we need to respect cultural boundaries and not rush to put one cultural practice on top of the explanatory totem pole. This doesn’t mean cognitive scientists shouldn’t be paying attention to experience, or even practicing and studying meditation, but we have to be careful not to ignore the normativity inherent in any ritualized culture. Embracing this basic realization takes seriously individual and cultural differences in consciousness, something I’ve argued for and believe is essential for the future of 4th wave cognitive science. Neurophenomenology, among other things, should be about recognizing and describing the normativity in our own practices, not importing those of another culture wholesale. I think that this is in line with much of what Varela wrote, and luckily, the tools to do just this are richly provided by the continental phenomenological tradition.

I believe that by carefully bracketing metaphysical and normative concepts, and by investigating the vast multitude of phenomenal experience in its full multi-cultural variety, we can begin to shed light on the mind-brain relationship in a meaningful and not strictly reductive fashion. Indeed, in answering the question “are monks expert introspectionists?” I think we should carefully question the normative thesis underlying that hypothesis: what exactly constitutes a “good” experiential report? Perhaps by taking a long view on Buddhism and cognitive science, we can begin to truly take the middle way to experience, in which we view all experiential reports as equally valid statements about some kind of subjective state. The question then becomes primarily longitudinal: do experiential reports demonstrate stability or consistency over time? How do trends in experiential reports relate to neural traits and states? And how do these phenomena interact with the particular cultural practices within which they are embedded? For me, this is the central contribution of enactive cognitive science and the best way forward for neurophenomenology.

Disclaimer: I am in no way suggesting that enactivists cannot or should not study advanced Buddhism if that is what they find interesting and useful. I of course realize that the M&L SRI is a very particular kind of meeting, and that many enactive cognitive scientists can and do work along the lines I am suggesting. My claim concerns best practices for the core of 4th-wave cognitive science, not the fringe. I greatly value the work done by the M&L and found the SRI to be an amazingly fruitful experience.