A gut, heart, and breath check: what matters most for cognition?

Last week I asked Twitter a question that comes up frequently in our lab: which visceral rhythm exerts the most impact on cognition [1]? Now, this question is deliberately vague. The goal is to force a ‘gut check’ on which visceral systems we, as neuroscientists, might reasonably expect to bias cognition. What do I mean by cognition? Literally any aspect of information processing. Perception, memory, learning, emotion, pain, you name it. Some of you jokingly pointed out that if any of these rhythms ceases entirely (e.g., in death), cognition will surely be impacted. So to get a bit closer to an experimental design which might build on these intuitions, I offered the following guidelines:

I.e., what I largely had in mind was the kinds of psychophysiology experiments that are currently in vogue – presenting stimuli during different phases of a particular visceral cycle, and then interpreting differences in reaction time, accuracy, subjective response, or whatever else as evidence of ‘brain-body interaction’. Of course, these are far from the only ways in which we can measure the influence of the body on the brain, and I intentionally left the question as open as possible. I wanted to know: what are your ‘gut feelings’ about gut feelings? And the Twitter neuroscience community answered the call!

[Image: poll responses]

Here you can see that overall, respiration was a clear winner, and was also my own choice. Surprisingly, gastric rhythms just beat out cardiac, at about 29 vs 27.5%. More on this later. Roughly 380/1099 respondents felt that, all else being equal, respiration was likely to produce the most influence on cognition. And I do agree; although the literature is heavily biased towards the cardiac domain in sheer number of papers, intuitively respiration feels like a better candidate for the title of heavyweight visceral rhythm champion.

Why is that? At least a few reasons spring to mind. For one thing, the depth and frequency of respiration directly modulates heart-rate variability, through basic physiological reflexes such as the respiratory sinus arrhythmia. At a more basic level still, respiration is of course responsible for gas exchange and pH regulation, conditioning the blood whose transport around the body depends upon the heart. That is to say: the heart is ultimately the chauffeur for the homeostatic function of the lungs, always second fiddle.

In the central nervous system, both systems of course matter in a big way, and are closely tied to one another. A lesion to the brain-stem that results in cardiac or respiratory arrest is equally deadly, and the basic homeostatic clocks that control these rhythms are tightly interwoven for good reason.

But here, one can reasonably argue that these low-level phenomena don’t really speak to the heart of the question, which is about (‘higher-order’) cognition. What can we say about that? Neuroviscerally speaking, in my opinion the respiratory rhythm has the potential to influence a much broader swath of brain areas. Respiration reaches the brain through multiple pathways: bypassing the limbic system altogether to target the prefrontal cortex via the innervation of the nasal septum, through basic somatosensory entrainment via the mechanical action of the lungs and chest wall, and through the same vagally mediated pathways as those carrying baroreceptive information from the heart. In fact, the low-level influence of respiration on the heart means that the brain can likely read out or predict heart rate at least partially from respiration alone, independently of any afferent baroreceptor information (that is, of course, speculation on my part). I think Sophie Betka’s response captures this intuition beautifully:

All of which is to say that respiration affords many potential avenues by which to bias, influence, or modulate cognition, broadly speaking. Some of you asked whether my question was more aimed at “the largest possible effect size” or the “most generalized effect size”. This is a really important question, which again, I simply intended to collapse across in my poll, whose main purpose was to generate thought and discussion. And it really is a critical issue for future research; we might predict that cardiac or gastric signals would produce very strong effects in very specific domains (e.g., fear or hunger), but that respiration might produce weak to moderate effects in a wide variety of domains. Delineating this difference will be crucial for future basic neuroscience, and even more so if these kinds of effects are to be of clinical significance.

Suffice to say, I was pleased to see a clear majority agree that respiration is the wave of the future (my puns, on the other hand, are likely growing tiresome). But I was surprised to see the strong showing of the gastric rhythm relative to cardiac. My internal ranking was definitely leaning towards 1) respiration, 2) cardiac, 3) gastric. My thinking here was: sure, the brain may track the muscular contractions of the stomach and GI tract, but is this really that relevant for any cognitive domain other than eating behavior? To be fair, I think many respondents probably did not consider the more restricted case of, for example, presenting different trials or stimuli at gastric contraction vs expansion, but interpreted the question more liberally in terms of hormone excretion, digestion, and possibly even gut microbiome or enteric nervous system-linked effects. And that is totally fair I think; taken as a whole, the rhythm of the stomach and gut is likely to exert a huge amount of primary and secondary effects on cognition. This issue was touched on quite nicely by my collaborator Paul Fletcher:

I think that is absolutely right; to a degree, how we answer the question depends exactly on which timescales and contexts we are interested in. It again raises the question: what kind of effects are we most interested in? Really strong but specific, or weaker, more general effects? Intuitively, being hungry definitely modulates the gastric rhythm, and in turn we’ve all felt the grim specter of ‘hanger’ causing us to lash out at the nearest street food vendor.

Forgetting these speedy bodily ‘rabbits’ altogether, what about the slowest of bodily rhythms [3]? Commenters Andrea Poli, Anil Seth, and others pointed out that at the very slowest timescales, hormonal and circadian rhythms can regulate all others, and the brain besides:

Indeed, if we view these rhythms as a temporal hierarchy (as some authors have argued), then it is reasonable to assume that causality should in general flow from the slowest, most general rhythms ‘upward’ to the fastest, most specific rhythms (i.e., cardiac, adrenergic, and neural). And there is definitely some truth to that; the circadian rhythm causes huge changes in baseline arousal, heart-rate variability, and even core bodily temperature. In the end, it’s probably best to view each of these smaller waves as inscribed within the deeper, slower waves; their individual shape may vary depending on context, but their global amplitude comes from the depths below. And of course, here the gloomy ghost of circular causality raises its incoherent head, because these faster rhythms can in turn regulate the slower, in a never-ceasing allostatic push-me-pull-you affair.

All that considered, it is perhaps unsurprising that in this totally unscientific poll at least, the gastric rhythm rose to challenge the almighty cardiac [2]. It seems clear that the preponderance of cardiac-brain studies is more an artifact of ease of study than a deep-seated engagement with their predominance. And ultimately, if we want to understand how the body shapes the mind, we will need to take precisely the multi-scale view espoused by many commenters.

A final thought on what kinds of effects might matter most: of all of these rhythms, only one is directly amenable to conscious control. That is, of course, the breath. And it is intriguing that across many cultural practices – elite sportsmanship, martial arts, meditation, and marksmanship, for example – the regulation of the breath is taught as a core technique for altering awareness, attention, and mood. I think for this reason, respiration is among the most interesting of all possible rhythms. It sits at that rare precipice, teetering between fully automatic and fully conscious. Our ability to become conscious of the breath can be a curse and a gift; many of you may feel a slight anxiety as you read this article, becoming ever so slightly more aware of your own rising and falling breath [4]. From the point of view of neuropsychiatry, I can’t help but feel that whatever the effects of respiration are, this amenability to control, and the possibility of regulating all other rhythms in turn, makes understanding the breath an absolutely critical focus for clinical translation.

Footnotes:
[1] Closely related to the question I am most commonly asked in talks: what effect size do you expect in general for cardiac/respiratory/gastric-brain interaction?

[2] I do apologize for the misleading usage of a poop emoji to signify the gastric rhythm. Although poop is certainly a causal product of the gastric rhythm, I did not mean to imply a stomach full of it.

[3] Regrettably, all of these rhythms would have been subsumed under the general response category of ‘other’. This likely greatly suppressed their response rates, but I think we can all forgive this limitation of a deeply unscientific intuition pump poll.

[4] And that is something which seems to uniquely define the body in general; usually absent, potentially unpleasant (or very pleasant) when present. Phenomenologists call this the ‘transparency’ of the body-as-subject.


The Embodied Computation Group Awarded Lundbeck and AIAS Fellowships! 🇩🇰 🧠


It brings me great pleasure to officially announce that I have been awarded joint starting fellowships from the Lundbeck Foundation and Aarhus Institute of Advanced Studies (AIAS)! These fellowships will enable me to launch my own research lab, the Embodied Computation Group (ECG), which will be based at both Aarhus University and Cambridge Psychiatry. This is an incredibly exciting development – obviously for my own sanity as an early career researcher, but also more importantly for the budding field of embodied neuroscience and computational psychiatry!

As an Associate Professor at Aarhus University and a visiting Senior Research Fellow at Cambridge Psychiatry, I will develop the ECG into a multi-disciplinary research group investigating the computational mechanisms of brain-body interaction, and their disruption in a variety of health-harming and psychiatric disorders. The ECG will initially focus on the Visceral Mind Project – an unprecedented chance to map the neural mechanisms through which…


#Raincloudplots – the preprint!

Today we’re extremely excited to bring you our latest project – the raincloud plots preprint! Working on this project has been an absolute pleasure – I’ve learned so much about open science and data visualization. Better yet, I can now tick ‘write a paper through Twitter DMs’ off my bucket list!

For those of you who missed it, a few months ago I wrote a blog post showing off some plots I’d hacked together in ggplot. To my surprise, these ‘raincloud plots’ generated a great deal of excitement, and people from a variety of disciplines started asking if there was a paper they could cite. Things really started to take off when Davide Poggiali and Tom Rhys Marshall unveiled their own raincloud plot functions in Python and Matlab. Together with Davide and Tom, I reached out to Rogier Kievit and Kirstie Whitaker, two shining stars of the open neuroscience community, and asked if they would be interested in helping us put together a multi-platform tutorial so we could help as many people as possible ‘make it rain’. Together with this all-star team, I’m very happy to say that version 1.0 of the Rainclouds paper is now up as a preprint at PeerJ!

Raincloud plots: a multiplatform tool for robust data visualization

https://peerj.com/preprints/27137v1

The paper is accompanied by a GitHub repository where you can find custom functions to create your own raincloud plots in R, Python, and Matlab. Thanks to the magic of Binder and Rmarkdown, you can even run the R and Python tutorials right in your browser! You can also follow these tutorials within the paper itself.

https://github.com/RainCloudPlots/RainCloudPlots#read-the-preprint

Now, at this juncture it is important to emphasize that this is version 1.0 of the project. We have a long list of revisions to make for our next preprint – and we invite you to contribute your own tweaks, modules, and excellent plots at our GitHub repo! You can find instructions on making your own contributions here:

https://github.com/RainCloudPlots/RainCloudPlots/blob/master/CONTRIBUTING.md

We look forward to your comments, feedback, and contributions to the project! For example, we’re considering adding an empirical aspect to the paper before submitting it for peer review. One idea we’ve had is to try to run an online experiment in a large sample of scientists, to probe whether raincloud plots improve the guesstimation of statistical differences and uncertainty. Do get in touch if that is something you would be interested in contributing to!

Of course, this project wouldn’t be possible without the contributions of the many developers and scientists who make amazing tools like ggplot, matplotlib, seaborn, and many more possible. As we point out in the paper, raincloud plots themselves are just one extension of a rich history of better plotting alternatives. We hope you’ll find our code and tutorials useful so you can continue to make the most kick-ass, robust data visualizations possible!

Some thoughts on writing ‘Bayes Glaze’ theoretical papers.

[This was a Twitter navel-gazing thread someone ‘unrolled’. I was really surprised that it read basically like a blog post, so I thought: why not post it here directly! I’ve made a few edits for readability. So consider this an experiment in micro-blogging ….]

In the past few years, I’ve started and stopped a paper on metacognition, self-inference, and expected precision about a dozen times. I just feel conflicted about the nature of these papers and want to make a very circumspect argument without too much hype. As many of you frequently note, we have way too many ‘Bayes glaze’ review papers in glam mags making a bunch of claims for which there is no clear relationship to data or actual computational mechanisms.

It has gotten so bad, I sometimes see papers or talks where it feels like the authors took totally unrelated concepts and plastered “prediction” or “prediction error” in random places. This is unfortunate, and it’s largely driven by the fact that these shallow reviews generate a bonkers amount of citations. It is a land rush to publish the same story over and over again, just changing the topic labels, planting a flag in an area and then publishing some quasi-related empirical stuff. I know people are excited about predictive processing, and I totally share that excitement. There is really excellent theoretical work being done, and I guess flag planting in some cases is not totally indefensible for early career researchers. But there is also a lot of cynical stuff, and I worry that it speaks so much more loudly than the good, careful work. The danger here is that we’re going to cause a blowback and ultimately be seen as ‘cargo cult computationalists’, which will drag down all of our research, good and otherwise.

In the past, my theoretical papers in this area have been super dense and frankly a bit confusing in places. I just wanted to try and really, really do due diligence and not overstate my case. But I do have some very specific theoretical proposals that I think are unique. I’m not sure why I’m sharing all this, but I think it is always useful to remind people that we feel imposter syndrome and conflict at all career levels. And I want to try and be more transparent in my own thinking – I feel that the earlier I get feedback, the better. These papers have been living in my head like demons, simultaneously too ashamed to be written and jealous of everyone else getting on with their sexy high-impact review papers.

Specifically, I have some fairly straightforward ideas about how interoception and neural gain (precision) inter-relate, and also a model I’ve been working on for years about how metacognition relates to expected precision. If you’ve seen any of my recent talks, you get the gist of these ideas.

Now, I’m *really* going to force myself to finally write these. I don’t really care where they are published; it doesn’t need to be a glamour review journal (as many have suggested I should aim for), although at my career stage, I guess that is the thing to do. I think I will probably preprint them on my blog, or at least muse openly about them here, although I’m not sure if this is a great idea for theoretical work.

Further, I will try and hold to three key promises:

  1. Keep it simple. One key hypothesis/proposal per paper. Nothing grandiose.
  2. Specific, falsifiable predictions about behavioral & neurophysiological phenomena, with no (minimal?) hand-waving
  3. Consider alternative models/views – it really gets my goat when someone slaps ‘prediction error’ on their otherwise straightforward story and then acts like it’s the only game in town. ‘Predictive processing’ tells you almost *nothing* about specific computational architectures, neurobiological mechanisms, or general process theories. I’ve said this until I’m blue in the face: there can be many, many competing models of any phenomenon, all of which utilize prediction errors.

These papers *won’t* be explicitly computational – although we have that work under preparation as well – but will just try to make a single key point that I want to build on. If I achieve my other three aims, it should be reasonably straightforward to build computational models from these papers.

That is the idea. Now I need to go lock myself in a cabin-in-the-woods for a few weeks and finally get these papers off my plate. Otherwise these Bayesian demons are just gonna keep screaming.

So, where to submit? Don’t say Frontiers…

For whom the bell tolls? A potential death-knell for the heartbeat counting task.

Interoception – the perception of signals arising from the visceral body – is a hot topic in cognitive neuroscience and psychology. And rightly so; a growing body of evidence suggests that brain-body interaction is closely linked to mood [1], memory [2], and mental health [3]. In terms of basic science, many theorists argue that the integration of bodily and exteroceptive (e.g., visual) signals underlies the genesis of a subjective, embodied point of view [4–6]. However, noninvasively measuring (and even better, manipulating) interoception is inherently difficult. Unlike visual or tactile awareness, where an experimenter can carefully control stimulus strength and detection difficulty, interoceptive signals are inherently spontaneous, uncontrolled processes. As such, prevailing methods for measuring interoception typically involve subjects attending to their heartbeats and reporting how many heartbeats they counted in a given interval. This is known as the heartbeat counting task (or Schandry task, named after its creator) [7]. Now a new study has cast extreme doubt on what this task actually measures.

The study, published by Zamariola et al. in Biological Psychology [8], begins by detailing what we already largely know: the heartbeat counting task is inherently problematic. For example, the task is easily confounded by prior knowledge or beliefs about one’s average heart rate. Zamariola et al. write:

“Since the original task instruction requires participants to estimate the number of heartbeats, individuals may provide an answer based on beliefs without actually attempting to perceive their heartbeats. Consistent with this view, one study (Windmann, Schonecke, Fröhlig, & Maldener, 1999) showed that changing the heart rate in patients with cardiac pacemaker, setting them to low (50 beats per minute, bpm), medium (75 bpm), or high (110 bpm) heart rate, did not influence their reported number of heartbeats. This suggests that these patients performed the task by relying on previous knowledge instead of perception of their bodily states.”

This raises the question of what exactly the task is measuring. The essence of heartbeat counting tasks is that one must silently count the number of perceived heartbeats over multiple temporal intervals. From this, an “interoceptive accuracy score” (IAcc) is computed using the formula:

IAcc = (1/3) ∑ [1 − (|actual heartbeats − reported heartbeats| / actual heartbeats)]
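
To make the scoring concrete, here is a minimal sketch in R (my own illustration, not code from the paper), using made-up counts for three intervals:

# Hypothetical example: IAcc for one participant across three counting intervals
actual   <- c(28, 41, 83)   # heartbeats recorded by the experimenter (made-up numbers)
reported <- c(21, 30, 61)   # heartbeats silently counted by the participant (made-up numbers)

# Mean over intervals of 1 - |actual - reported| / actual; 1 = perfect accuracy
iacc <- mean(1 - abs(actual - reported) / actual)
iacc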

This formula is meant to render over-counting (counting heartbeats that don’t occur) and under-counting (missing actual heartbeats) equivalent, in a score bounded by 0-1. Zamariola et al argue that these scores lack fundamental construct validity on the basis of four core arguments. I summarize each argument below; see the full article for the detailed explanation:

  1. [Interoceptive] abilities involved in not missing true heartbeats may differ from abilities involved in not over-interpreting heartbeat-unrelated signals. [This assumption] would be questioned by evidence showing that IAcc scores largely depend on one error type only.
  2. IAcc scores should validly distinguish between respondents. If IAcc scores reflect people’s ability to accurately perceive their inner states, a correlation between actual and reported heartbeats should be observed, and this correlation should linearly increase with higher IAcc scores (i.e., better IAcc scorers should better map actual and reported heartbeats).
  3. A valid measure of interoceptive accuracy should not be structurally tied to heart condition. This is because heart condition (i.e., actual heartbeats) is not inherent to the definition of the interoceptive accuracy construct. In other words, it is essential for construct validity that people’s accuracy at perceiving their inner life is not structurally bound to their cardiac condition.
  4. The counting interval [e.g., 10, 15, or 30 seconds] should not impact IAcc; a wide range of intervals is in fact used, and the resultant measure should be independent of this choice.

Zamariola et al. then go on to show that in a sample of 572 healthy individuals (386 female), each of these assumptions is strongly violated: IAcc scores depend largely on under-reporting heartbeats (Fig. 1); the correlation of actual and perceived heartbeats is extremely low, and is higher at average than at high IAcc levels (Fig. 2); IAcc is systematically increased at slower heart rates (Fig. 3); and longer time intervals lead to substantially worse IAcc (not shown):

Fig. 1

  1. IAcc scores are mainly driven by under-reporting; “less than 5% of participants showed overestimation… Hence, IAcc scores essentially inform us of how (un)willing participants are to report they perceived a heartbeat”.

Fig. 2

2. Low overall correlation (grey dashed line) between heartbeats counted and actual heartbeats (r = 0.16, 2.56% shared variance). Further, the correlation varied non-linearly across bins of IAcc scores, which in the authors’ words demonstrates that “IAcc scores fail to validly differentiate individuals in their ability to accurately perceive their inner states within the top 60% IAcc scorers.”

Fig. 3

3. IAcc scores depend negatively on the number of actual heartbeats, suggesting that individuals with lower overall heart rate will be erroneously characterized as having ‘good interoceptive accuracy’.

Overall, the authors draw the conclusion that the heartbeat counting task is nigh-useless, lacking both face and construct validity. What should we measure instead? The authors offer that, if one can run very many trials, then the mere correlation of counted and actual heartbeats may be a (slightly) better measure. However, given the massive bias towards under-reporting heartbeats, they suggest that the task measures only the willingness to report a heartbeat at all. As such, they highlight the need for true psychophysical tasks which can distinguish participant reporting bias (i.e., criterion) from true sensitivity to heartbeats. A potentially robust alternative may be the multi-interval heartbeat discrimination task [9], in which a method of constant stimuli is used to compare heartbeats to multiple intervals of temporal stimuli. However, this task is substantially more difficult to administer; it requires some knowledge of psychophysics and as much as 45 minutes to complete. As many (myself included) are interested in measuring interoception in sensitive patient populations, it’s not a given that this task will be widely adopted.

I’m curious what my readers think. For me, this paper drives a final nail into the coffin of heartbeat counting tasks. Nearly every interoception researcher I’ve spoken to has expressed concerns about what the task actually measures. Worse, large intra-subject variance and the fact that many subjects perform incredibly poorly on the task seem to undermine the idea that it is anything like a measure of cardiac perception. At best, it seems to be a measure of interoceptive attention and report bias. The study by Zamariola and colleagues is well-powered, sensibly conducted, and seems to provide unambiguous evidence against the task’s basic validity. Heartbeat counting: the bell tolls for thee.

References

  1. Foster, J. A. & McVey Neufeld, K.-A. Gut–brain axis: how the microbiome influences anxiety and depression. Trends Neurosci. 36, 305–312 (2013).
  2. Zelano, C. et al. Nasal Respiration Entrains Human Limbic Oscillations and Modulates Cognitive Function. J. Neurosci. 36, 12448–12467 (2016).
  3. Khalsa, S. S. et al. Interoception and Mental Health: a Roadmap. Biol. Psychiatry Cogn. Neurosci. Neuroimaging (2017). doi:10.1016/j.bpsc.2017.12.004
  4. Park, H.-D. & Tallon-Baudry, C. The neural subjective frame: from bodily signals to perceptual consciousness. Philos. Trans. R. Soc. Lond. B Biol. Sci. 369, 20130208 (2014).
  5. Seth, A. K. Interoceptive inference, emotion, and the embodied self. Trends Cogn. Sci. 17, 565–573 (2013).
  6. Barrett, L. F. & Simmons, W. K. Interoceptive predictions in the brain. Nat. Rev. Neurosci. 16, 419–429 (2015).
  7. Schandry, R., Sparrer, B. & Weitkunat, R. From the heart to the brain: A study of heartbeat contingent scalp potentials. Int. J. Neurosci. 30, 261–275 (1986).
  8. Zamariola, G., Maurage, P., Luminet, O. & Corneille, O. Interoceptive Accuracy Scores from the Heartbeat Counting Task are Problematic: Evidence from Simple Bivariate Correlations. Biol. Psychol. (2018). doi:10.1016/j.biopsycho.2018.06.006
  9. Brener, J. & Ring, C. Towards a psychophysics of interoceptive processes: the measurement of heartbeat detection. Philos. Trans. R. Soc. B Biol. Sci. 371, 20160015 (2016).


Bon Voyage – Neuroconscience goes to Cambridge! A Retrospective and Thank You.

Today is a big day – I’m moving to Cambridge! After nearly five years of living in London, it’s finally time for me to move on to greener pastures. It’s hard to believe, really. I first came to London nearly fifteen years ago on a high school trip. It was love at first sight. I knew that somehow, someday, I would live here. As a Bermudian immigrant living in Florida, to me London was the centre of the world. The bustling big city, but unlike many in the US, one rich in a thousand years of history and culture. And although I didn’t know it then, my eventual career in neuroscience would draw me towards this great city like a moth to the flame.

I still remember it like it was yesterday: my first internship in a neuroimaging lab at the University of Central Florida. Graduate students poring over some strange software called “SPM” to produce colorful brain maps, accompanied by an arcane tome of the same name. Although I knew already then that I wanted to do cognitive neuroimaging, SPM and the eponymously named Functional Imaging Laboratory (FIL) were just names on a book at the time. Later however, when I joined the Interacting Minds Group at Aarhus University, SPM and the FIL were everywhere. Most of our PIs had undertaken their formative training in the centre; we even had our own Friday project presentations modelled exactly on the original. Every single desk had a copy of the SPM textbook. To me, London, the FIL, and the brilliant people working there represented an unchallenged mecca of neuroimaging methods. I set my sights on a postdoc in Queen Square.

Yet, even halfway through my PhD, I wasn’t sure how or even if I’d manage to realize my dream. During my PhD I visited the Institute of Cognitive Neuroscience (ICN) and applied to several labs unsuccessfully. All the excitement and energy of the London neuroscience scene seemed only there to tease me; as if to say an upstart boy from Alabama vis-à-vis Bermuda could only taste what was on offer, but never possess it. Finally, something broke; Geraint Rees, then ICN director, took an interest in my work on embodiment and perception, and invited me to apply for a jaw-dropping four years of postdoc, based between the FIL and ICN. I was elated to apply, and even more so to get it. Breathlessly I told my friends and family back home – this was it, my big break in the big city. And I wasn’t just headed to London, but to that seemingly mythic centre from which so much of my interests – brain imaging, predictive processing, and clever experimental design – seemed to stem. As far as I was concerned, this was my big chance on Broadway, and I told anyone who would listen.

Of course, in the end reality is never quite the same as expectation. I’ll never quite forget my shock on my first day on the job. During my induction, I was giddy as Marcia Bennett led me around the centre, explaining various protocols and introducing me to other fellows. But then she took me down to the basement office; a sterile, open-plan room with more than twenty desks, no real windows, and ten or more active researchers all working in close proximity. As she left me at my desk, I worried that perhaps I’d made a mistake. My offices in Denmark had been luxurious, often empty, and with a stunning view of the city. Now I was to sit in a basement, crammed beside what seemed then like a factory line of postdocs, every day for four years? On top of that, I quickly learned that commuting one hour every day on the tube is far from fun, once the novelty wears off. Nevertheless, I set to my work. And in time, as I adjusted to living in a big city and working in a big centre, I came to find that London, UCL, and Queen Square would capture my mind and my heart in ways I could never have imagined.

Now as I look back, it’s difficult to believe so much has come to pass. I’ve lived in my dream city for the past five years; I’ve gone to bohemian art shows, sipped with brewers at hipster beer festivals, visited ornate science societies, and grown a massive beard. I’ve ended up at crazy nightclubs at early hours; I’ve witnessed major developments in cognitive neuroscience, and I started a neuroscience sport climbing club. Like an extremophile sucking off some nutrient-rich deep-sea volcanic vent, I’ve inhaled every bit of the exciting new methods, ideas, and energy that comes through the amazing and overwhelming world that is London neuroscience. I’ve made friends, and I’ve made mistakes. I’ve had major victories, and also major losses. And through it all, I’ve been supported by some of the most brilliant and helpful friends, colleagues, and mentors anyone could ever possibly ask for.

Of course, London is many things to many people, but perhaps it is home to the fewest of all. So many times I asked myself: am I a Londoner? Will this city ever be my home, or is it just one great side-show on the great journey of life? In the end, I still don’t know the answer. I’m a nomad, and I’ve lived in London for as long as I’ve lived anywhere else in the world. It will always be a part of me, and I will always be grateful for the gifts she has given me. I leave London a better person, and a better neuroscientist. And who knows… maybe someday I’ll return (…although hopefully at a higher pay grade!).

Where do you go after living in your dream city and working at your dream job? To Cambridge! I’m on my way today to a new flat, a new job, a new mentor, and a new centre. If my mother could see me now, I’m sure she’d never believe her baby boy made it so far away from sweet home Alabama. And London won’t be far away; indeed, for some months I’ll be returning each week to carry on collaborations as I transition between jobs. And as Karl says: once you’ve been at the FIL, you never really leave. The FIL is many things, but most of all it is a family of people bonded by a desire to do really kick-ass cognitive neuroscience. So as I go on to Cambridge and whatever lies beyond, I know I will always carry with me my training, my friends, and my amazing colleagues.

And the future is bright indeed! This post is already too long; but let it suffice as a teaser when I say that my upcoming projects will be some of the most exciting work I have ever done. I’m leaving the FIL armed with a shiny new model of my own make and a suite of experimental questions that I am eager to answer. Together with Professor Paul Fletcher and the amazing colleagues of the new MRC/Wellcome Translational Research Facility and Cambridge Psychiatry Department, I will have access to unique patient groups not found anywhere else in the world, the latest methods in pharmacological neuroimaging, and a team of amazing collaborators accelerating our research. By applying the methods and models I’ve developed in London in this setting, I will have the chance to push our understanding of interoception, metacognition, and embodied self-inference to newfound heights. If London was my Broadway, then Cambridge shall be the launchpad for my world tour.

So stay tuned! If I learned one thing in the past half-decade, it’s that I need to return to blogging. Now that I’ve finally recaptured my voice, and am headed forward on another amazing research adventure, it’s more important than ever to communicate to you, my beloved hive mind, what exciting new developments are on the horizon. In the near future I will outline some of the exciting new directions my research will be taking at Cambridge, where we will use a variety of methods to probe brain-body interaction and its disruption in psychiatric and health-harming behaviours.

And now for some much-needed thanks. Thanks to my beautiful and brilliant wife Francesca – whose incredible research inspires me daily – for her unwavering and unconditional love and support in all things. To my Grandmother, who saved me so many times and is the inspiration for so much of my work. Thanks Mom – I wish you could see this. Thanks Dad for teaching me to work hard for my dreams and to always lead the way. Thanks to Corey and Marissa, who are the best brother and sister anyone could ask for. To Jonathan and Alex, the two best men on the planet – without our Skype sessions I’d have never survived this long! To my amazing mentors – of whom I have been fortunate to have so many who have helped me so dearly: Shaun Gallagher, Andreas Roepstorff, Antoine Lutz, Uta & Chris Frith, Geraint Rees, Jonathan Smallwood, and Karl Friston. To my students Maria, Darya, Calum, and Thomas, who did such an amazing job making our science a reality. To my awesome, amazing friends, colleagues, & collaborators who challenge me, help me, and help make every piece of science that comes across our desks as brilliant, rigorous, and innovative as possible – thanks especially to Tobias Hauser, Peter Zeidman, Sam Schwarzkopf, and Francesco Rigoli for letting me distract you with endless questions – you guys make my research rock in so many ways. Thanks to John Greenwood, Joel Winston, Fred Dick, Martina Callaghan, John Ashburner, Gareth Barnes, Steve Fleming, Ray Dolan, Tristan Bekinschtein, Becky Lawson, Rimona Weil, and all of my amazing collaborators – here’s to many future projects together! To the entire basement and first-floor clan who put up with my outbursts and antics: you guys are the best, and you made this time amazing. Thanks to Brianna, Sofia, Antonio, Eva, Elizabeth, Dan, Frederike, Phillip, Alex, Wen Wen, and everyone from the ICN who made Queen Square the coolest place in town, and who showed me how fun castles can be. To the amazing administrative and scientific staff of the FIL, who truly make it the best place to work in neuroscience, bar none. And finally – thanks to YOU, my readers and digital colleagues, for your support, your energy, and your enthusiasm, which make @Neuroconscience possible and help me push myself into ever bolder frontiers.

To Cambridge, and beyond!

Micah Galen Allen/@neuroconscience

Introducing Raincloud Plots!

UPDATE: RAINCLOUD PLOTS NOW PUBLISHED IN WELLCOME OPEN RESEARCH! SEE LINKS BELOW

Manuscript:

https://wellcomeopenresearch.org/articles/4-63/v1

Github:

https://github.com/RainCloudPlots/RainCloudPlots

Like many of you, I love ggplot2. The ability to make beautiful, informative plots quickly is a major boon to my research workflow. One plot I’ve been particularly fond of recently is the ‘violin + boxplot + jittered dataset’ combo, which nicely provides an overview of the raw data, the probability distribution, and ‘statistical inference at a glance’ via medians and confidence intervals. However, as pointed out in this tweet by Naomi Caselli, violin plots mirror the data density in a totally uninteresting/uninformative way, simply repeating the exact same information for the sake of visual aesthetics. In my quest for a better plot, which can show differences between groups or conditions while providing maximal statistical information, I recently came upon the ‘split violin’ plot. This nicely deletes the mirrored density, freeing up room for additional plots such as boxplots or raw data. Inspired by a particularly beautiful combination of split-half violins and dotplots, I set out to make something a little less confusing. In particular, dot plots are rather complex and don’t precisely mirror what is shown in the split violin, possibly leading to more confusion than clarity. Introducing ‘elephant’ and ‘raincloud’ plots, which combine the best of all worlds! Read on for the full code recipe plus some hacks to make them look pretty (apologies for poor markup formatting, haven’t yet managed to get markdown to play nicely with WordPress)!

[Image: RainCloudPlotDemo – a ‘raincloud’ plot, which combines boxplots, raw jittered data, and a split-half violin.]

Let’s get to the data – with extra thanks to @MarcusMunafo, who shared this dataset on the University of Bristol’s open science repository.

First, we’ll set up the needed libraries and import the data:

library(readr)
library(tidyr)
library(ggplot2)
library(Hmisc)
library(plyr)
library(RColorBrewer)
library(reshape2)

# Load Ben Marwick's custom flat-violin (half-violin) geom
source("https://gist.githubusercontent.com/benmarwick/2a1bb0133ff568cbe28d/raw/fb53bd97121f7f9ce947837ef1a4c65a73bffb3f/geom_flat_violin.R")

# Emotion recognition dataset from the University of Bristol open data repository
my_data <- read.csv(url("https://data.bris.ac.uk/datasets/112g2vkxomjoo1l26vjmvnlexj/2016.08.14_AnxietyPaper_Data%20Sheet.csv"))

head(my_data)

Although it doesn’t really matter for this demo, in this experiment healthy adults recruited from MTurk (n = 2006) completed a six-alternative forced-choice task in which they were presented with basic emotional expressions (anger, disgust, fear, happiness, sadness, and surprise) and had to identify the emotion presented in the face. Outcome measures were recognition accuracy and unbiased hit rate (i.e., sensitivity).

For this demo, we’ll focus on the unbiased hitrate for anger, disgust, fear, and happiness conditions. Let’s reshape the data from wide to long format, to facilitate plotting:

library(reshape2)
my_datal <- melt(my_data, id.vars = c("Participant"), measure.vars = c("AngerUH", "DisgustUH", "FearUH", "HappyUH"), variable.name = "EmotionCondition", value.name = "Sensitivity")

head(my_datal)

Now we’re ready to start plotting. But first, let’s define a theme to make pretty plots.

raincloud_theme = theme(
text = element_text(size = 10),
axis.title.x = element_text(size = 16),
axis.title.y = element_text(size = 16),
axis.text = element_text(size = 14),
axis.text.x = element_text(angle = 45, vjust = 0.5),
legend.title=element_text(size=16),
legend.text=element_text(size=16),
legend.position = "right",
plot.title = element_text(lineheight=.8, face="bold", size = 16),
panel.border = element_blank(),
panel.grid.minor = element_blank(),
panel.grid.major = element_blank(),
axis.line.x = element_line(colour = 'black', size=0.5, linetype='solid'),
axis.line.y = element_line(colour = 'black', size=0.5, linetype='solid'))

Now we need to calculate some summary statistics:

# Lower and upper bounds: mean -/+ 1 standard deviation
lb <- function(x) mean(x) - sd(x)
ub <- function(x) mean(x) + sd(x)

sumld<- ddply(my_datal, ~EmotionCondition, summarise, mean = mean(Sensitivity), median = median(Sensitivity), lower = lb(Sensitivity), upper = ub(Sensitivity))

head(sumld)

Now we’re ready to plot! We’ll start with a ‘raincloud’ plot (thanks to Jon Roiser for the great suggestion!):

g <- ggplot(data = my_datal, aes(y = Sensitivity, x = EmotionCondition, fill = EmotionCondition)) +
geom_flat_violin(position = position_nudge(x = .2, y = 0), alpha = .8) +
geom_point(aes(y = Sensitivity, color = EmotionCondition), position = position_jitter(width = .15), size = .5, alpha = 0.8) +
geom_boxplot(width = .1, outlier.shape = NA, alpha = 0.5) +
expand_limits(x = 5.25) +
guides(fill = FALSE) +
guides(color = FALSE) +
scale_color_brewer(palette = "Spectral") +
scale_fill_brewer(palette = "Spectral") +
coord_flip() +
theme_bw() +
raincloud_theme

g

I love this plot. Adding a bit of alpha transparency makes it so we can overlay boxplots over the raw, jittered data points. In one plot we get basically everything we need: eyeballed statistical inference, assessment of data distributions (useful to check assumptions), and the raw data itself showing outliers and underlying patterns. We can also flip the plots for an ‘Elephant’ or ‘Little Prince’ plot, so named for its resemblance to an elephant being eaten by a boa constrictor:

g <- ggplot(data = my_datal, aes(y = Sensitivity, x = EmotionCondition, fill = EmotionCondition)) +
geom_flat_violin(position = position_nudge(x = .2, y = 0), alpha = .8) +
geom_point(aes(y = Sensitivity, color = EmotionCondition), position = position_jitter(width = .15), size = .5, alpha = 0.8) +
geom_boxplot(width = .1, outlier.shape = NA, alpha = 0.5) +
expand_limits(x = 5.25) +
guides(fill = FALSE) +
guides(color = FALSE) +
scale_color_brewer(palette = "Spectral") +
scale_fill_brewer(palette = "Spectral") +
# coord_flip() +   # left commented out: the vertical orientation gives the 'elephant' look
theme_bw() +
raincloud_theme

g

[Image: ElephantPlotDemo]

For those who prefer a more classical approach, we can replace the boxplot with a mean and confidence interval using the summary statistics we calculated above. Here we’re using +/- 1 standard deviation, but you could also plot the SEM or 95% CI:

g <- ggplot(data = my_datal, aes(y = Sensitivity, x = EmotionCondition, fill = EmotionCondition)) +
geom_flat_violin(position = position_nudge(x = .2, y = 0), alpha = .8) +
geom_point(aes(y = Sensitivity, color = EmotionCondition), position = position_jitter(width = .15), size = .5, alpha = 0.8) +
geom_point(data = sumld, aes(x = EmotionCondition, y = mean), position = position_nudge(x = 0.3), size = 2.5) +
geom_errorbar(data = sumld, aes(ymin = lower, ymax = upper, y = mean), position = position_nudge(x = 0.3), width = 0) +
expand_limits(x = 5.25) +
guides(fill = FALSE) +
guides(color = FALSE) +
scale_color_brewer(palette = "Spectral") +
scale_fill_brewer(palette = "Spectral") +
theme_bw() +
raincloud_theme

g
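
If you would rather plot the SEM or a 95% confidence interval than +/- 1 SD, only the summary functions need to change. Here is a minimal sketch (my own variation on the recipe above, using the same ddply pattern):

# 95% CI half-width for the mean (assumes approximate normality)
ci95 <- function(x) qt(0.975, df = length(x) - 1) * sd(x) / sqrt(length(x))

sumld <- ddply(my_datal, ~EmotionCondition, summarise,
mean = mean(Sensitivity),
lower = mean(Sensitivity) - ci95(Sensitivity),
upper = mean(Sensitivity) + ci95(Sensitivity))

# The plotting code is unchanged; geom_errorbar simply picks up the new lower/upper columns.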

Et voilà! I find that the combination of raw data + box plots + split violin is very powerful and intuitive, and really leaves nothing to the imagination when it comes to the underlying data. Although here I used a very large dataset, I believe these would still work well for more typical sample sizes in cognitive neuroscience (i.e., N ≈ 30–100), although you may want to include only the boxplot for very small samples.

I hope you find these useful, and you can be absolutely sure they will be appearing in some publications from our workbench as soon as possible! Go forth and make it rain!

Thanks very much to @naomicaselli, @jonclayden, @patilindrajeets, & @jonroiser for their help and inspiration with these plots!

2017 – my year so far

Nothing like a long-awaited vacation. My wife and I take our yearly two-week vacation in September. Although it is a bit late, we enjoy the timing as most summer spots are still warm but have few tourists and low prices. This year we are in Francesca’s hometown of Marostica, Italy, for not one but TWO weddings. It will be a true test of my self-discipline to not gain several kilos on this trip!

That being said, phew, am I ready for some vacation. The later date means that as the summer stretches on, my work ethic flags a bit and I become prone to taking more and more staycation days. I really believe this mid-summer lethargy is some kind of holdover from grade-school conditioning; the body just cannot forget the joy that is the first day of summer vacation. And I do believe that creative, passionate work requires long periods of rest and enjoyment to avoid becoming repetitive. And so, I’ve had a fun summer of travelling, building collaborations, and putting pieces in place for new projects and applications.

Overall, this has been a busy, if somewhat odd, year. Most of the first half was taken up by my first ever fellowship-level grant application. In 2016, I had just found my publishing stride, clearing one item after another off my desk. Publishing finally felt ‘normal’, like a part of the job with manageable known-knowns and known-unknowns. I even enjoyed it. And of course, there was a much needed, although transient, period of enjoying the feeling of accomplishment. It is nice to have a stable of new papers across a spread of journals and think, ‘well, that is dinner for the next few years secured’. It was in the midst of this semi-tranquility that I realized the end of my long and luxurious postdoc was looming near. Suddenly I would have to do something entirely new; the complexities and uncertainties of applying for start-up funds for my first lab made me feel like an anxious graduate student again.

Of course, my first attempt took ages, was rife with errors, and was overly complex in almost every way. Grant writing, like everything, is really a learning process; I wish I had applied for more small grants during my postdoc, just to better prepare myself for the process itself. Writing papers does not really train you for this task, which is much more ‘sales’-oriented. And it doesn’t feel very productive. Here you are pouring an amount of work which could easily produce 2-3 new papers into an endeavor which is overwhelmingly likely to fail. It feels a bit like taking all of your hard-won momentum and dumping it into a bin labelled ‘unlikely hopes and dreams’.

But in the end, it really was worth it, at least for me. Even if the outcome itself is uncertain, just the process of collecting all of your achievements under one banner will strengthen and clarify your view of yourself as a professional. More importantly, having to think in a highly unconstrained, big-picture way will force you to find the strongest themes within your research. Nevertheless, the process of comparing yourself to the best of your peers is quite emotionally draining, and doubly so when you are worried about losing steam on your research.

And then you submit it. 2-3 months of writing, years of planning and dreaming, all boiled down to one button-click submission. And then you try to go back to your research agenda for the year; it’s important not to let the hot irons cool too much while you are out seeking funding. This is where I am now: awaiting a massive decision, while still trying to forge on with my ongoing research. And we’ve got some terribly exciting new projects and collaborations in the works this year, ranging from my first ever registered report, to big-data-fueled investigations of brain-body connectomics, and a new computational neuroimaging task, which we’ll be scanning in both MEG and fMRI. I’m really looking forward to the new challenges, techniques, and discoveries this year will bring.

And that is the thought that keeps me going! I know that, even if I am unsuccessful in my quest for funding, I will always find a way to pursue these questions.  Because it is what I do, and what I live for.

Next post: the year to come! I’ll give some teasers about the different projects we are currently working on. Also, in the next weeks I’ll be overhauling this website to give more information about my research, and also plan to start blogging my backlog of recent publications.

The future of neuroconscience and me

The past two years have been a crazy ride. We’ve seen a seeming upheaval of the political world that has shaken most of us to our core. Along the way, our social networks have also changed. Sometimes it feels as if we are all reeling along. When I started this blog, I had the perhaps naive view that I was initiating the nexus of some kind of hivemind. Indeed, my research has benefited massively from this blog, and from all of your amazing input and interaction. For a while, all was good. My following grew, I blogged frequently, and it was all quite rewarding. But when the network started to go mad with current events, perhaps I went a bit mad too. Somewhere along the way, the plot was lost. Was neuroconscience my research, a livestream of it, or a hub for my particular flavor of research interests? The result, I think, has been an increasingly less coherent stream, and my blog posts sputtering down to nothing. I need to find my voice again, and to do that, I think I need to begin splitting out the various streams of my public outreach, research, and personal-is-political daily life. I’m not exactly sure how best to do that, but that didn’t stop me from starting neuroconscience in the first place.

The first steps have been underway for some time. I’ve already redesigned this website into a neuroconscience ‘blog’ and a more standard ‘here I am’ academic webfront. My blog will continue to be a collection of rants, thoughts, and musings on all things cogneuro and beyond. My Twitter stream @neuroconscience shall become a more content-focused, curated stream of my favorite links, media, and research. Maintaining a high signal-to-noise ratio will be a paramount goal. As usual, this will follow the whims of my ADHD-driven interest; my personal thoughts, political rants, and the like will now be posted exclusively from my new ‘personal’ account @micahgallen. Here you can get my unfiltered voice, all the random musings, and especially loads of navel gazing. In case you are curious, the g stands for ‘Galen’, my middle name. I know some of you thought it better to keep it all in one place, but I believe many who once came to my feed for useful and interesting content are being turned away by the volume and lack of organization in my tweets. My personal account will collect the chaos; neuroconscience will return to the ‘good ol’ days’ when it was a highly curated stream. And who knows, someday maybe we’ll be joined by a lab account…

Enjoy!

Micah/@neuroconscience