The Embodied Computation Group Awarded Lundbeck and AIAS Fellowships! 🇩🇰 🧠

The Embodied Computation Group

It brings me great pleasure to officially announce that I have been awarded joint starting fellowships from the Lundbeck Foundation and Aarhus Institute of Advanced Studies (AIAS)! These fellowships will enable me to launch my own research lab, the Embodied Computation Group (ECG), which will be based at both Aarhus University and Cambridge Psychiatry. This is an incredibly exciting development – obviously for my own sanity as an early career researcher, but also more importantly for the budding field of embodied neuroscience and computational psychiatry!

As an Associate Professor at Aarhus University and a visiting Senior Research Fellow at Cambridge Psychiatry, I will develop the ECG into a multi-disciplinary research group investigating the computational mechanisms of brain-body interaction, and their disruption in a variety of health-harming and psychiatric disorders. The ECG will initially focus on the Visceral Mind Project – an unprecedented chance to map the neural mechanisms through which…


#Raincloudplots – the preprint!

Today we’re extremely excited to bring you our latest project – the raincloud plots preprint! Working on this project has been an absolute pleasure – I’ve learned so much about open science and data visualization. Better yet, I can now tick ‘write a paper through twitter DMs’ off my bucket list!

For those of you who missed it, a few months ago I wrote a blog post showing off some plots I’d hacked together in ggplot. To my surprise, these ‘raincloud plots’ generated a great deal of excitement, and people from a variety of disciplines started asking if there was a paper they could cite. Things really started to take off when Davide Poggiali and Tom Rhys Marshall unveiled their own raincloud plot functions in Python and Matlab. Together with Davide and Tom, I reached out to Rogier Kievit and Kirstie Whitaker, two shining stars of the open neuroscience community, and asked if they would be interested in helping us put together a multi-platform tutorial so we could help as many people as possible ‘make it rain’. Together with this all-star team, I’m very happy to say that version 1.0 of the raincloud plots paper is now up as a preprint at PeerJ!

Raincloud plots: a multiplatform tool for robust data visualization

https://peerj.com/preprints/27137v1

The paper is accompanied by a GitHub repository where you can find custom functions to create your own raincloud plots in R, Python, and Matlab. Thanks to the magic of Binder and Rmarkdown, you can even run the R and Python tutorials right in your browser! You can also follow these tutorials within the paper itself.

https://github.com/RainCloudPlots/RainCloudPlots#read-the-preprint

Now, at this juncture it is important to emphasize that this is version 1.0 of the project. We have a long list of revisions to make for our next preprint – and we invite you to contribute your own tweaks, modules, and excellent plots at our GitHub repo! You can find instructions on making your own contributions here:

https://github.com/RainCloudPlots/RainCloudPlots/blob/master/CONTRIBUTING.md

We look forward to your comments, feedback, and contributions to the project! For example, we’re considering adding an empirical aspect to the paper before submitting it for peer review. One idea we’ve had is to try to run an online experiment in a large sample of scientists, to probe whether raincloud plots improve the guesstimation of statistical differences and uncertainty. Do get in touch if that is something you would be interested in contributing to!

Of course, this project wouldn’t be possible without the contributions of the many developers and scientists who make amazing tools like ggplot2, matplotlib, seaborn, and many more possible. As we point out in the paper, raincloud plots themselves are just one extension of a rich history of better plotting alternatives. We hope you’ll find our code and tutorials useful so you can continue to make the most kick-ass, robust data visualizations possible!

Some thoughts on writing ‘Bayes Glaze’ theoretical papers.

[This was a twitter navel-gazing thread someone ‘unrolled’. I was really surprised that it read basically like a blog post, so I thought why not post it here directly! I’ve made a few edits for readability. So consider this an experiment in micro-blogging ….]

In the past few years, I’ve started and stopped a paper on metacognition, self-inference, and expected precision about a dozen times. I just feel conflicted about the nature of these papers and want to make a very circumspect argument without too much hype. As many of you frequently note, we have way too many ‘Bayes glaze’ review papers in glam mags making a bunch of claims for which there is no clear relationship to data or actual computational mechanisms.

It has gotten so bad that I sometimes see papers or talks where it feels like the authors took totally unrelated concepts and plastered “prediction” or “prediction error” over them in random places. This is unfortunate, and it’s largely driven by the fact that these shallow reviews generate a bonkers amount of citations. It is a land rush to publish the same story over and over again, just changing the topic labels, planting a flag in an area and then publishing some quasi-related empirical work. I know people are excited about predictive processing, and I totally share that excitement. There is really excellent theoretical work being done, and I guess flag planting in some cases is not totally indefensible for early career researchers. But there is also a lot of cynical stuff, and I worry that it speaks much more loudly than the good, careful work. The danger is that we cause a blowback and are ultimately seen as ‘cargo cult computationalists’, which will drag all of our research down, good and otherwise.

In the past, my theoretical papers in this area have been super dense and frankly a bit confusing in some aspects. I just wanted to try and really, really do due diligence and not overstate my case. But I do have some very specific theoretical proposals that I think are unique. I’m not sure why I’m sharing all this, but I think it is because it is always useful to remind people that we feel imposter syndrome and conflict at all career levels. And I want to try and be more transparent in my own thinking – I feel that the earlier I get feedback the better. And these papers have been living in my head like demons, simultaneously too ashamed to be written and jealous of everyone else getting on with their sexy high-impact review papers.

Specifically, I have some fairly straightforward ideas about how interoception and neural gain (precision) inter-relate, and I also have a model I’ve been working on for years about how metacognition relates to expected precision. If you’ve seen any of my recent talks, you get the gist of these ideas.

Now, I’m *really* going to force myself to finally write these. I don’t really care where they are published; it doesn’t need to be a glamour review journal (as many have suggested I should aim for), although at my career stage I guess that is the thing to do. I think I will probably preprint them on my blog, or at least muse openly about them here, although I’m not sure if this is a great idea for theoretical work.

Further, I will try and hold to three key promises:

  1. Keep it simple. One key hypothesis/proposal per paper. Nothing grandiose.
  2. Specific, falsifiable predictions about behavioral & neurophysiological phenomena, with no (minimal?) hand-waving.
  3. Consider alternative models/views – it really gets my goat when someone slaps ‘prediction error’ on their otherwise straightforward story and then acts like it’s the only game in town. ‘Predictive processing’ tells you almost *nothing* about specific computational architectures, neurobiological mechanisms, or general process theories. I’ve said this until I’m blue in the face: there can be many, many competing models of any phenomenon, all of which utilize prediction errors.

These papers *won’t* be explicitly computational – although we have that work under preparation as well – but will just try to make a single key point that I want to build on. If I achieve my three aims above, it should be reasonably straightforward to build computational models from these papers.

That is the idea. Now I need to go lock myself in a cabin-in-the-woods for a few weeks and finally get these papers off my plate. Otherwise these Bayesian demons are just gonna keep screaming.

So, where to submit? Don’t say Frontiers…

For whom the bell tolls? A potential death-knell for the heartbeat counting task.

Interoception – the perception of signals arising from the visceral body – is a hot topic in cognitive neuroscience and psychology. And rightly so; a growing body of evidence suggests that brain-body interaction is closely linked to mood1, memory2, and mental health3. In terms of basic science, many theorists argue that the integration of bodily and exteroceptive (e.g., visual) signals underlies the genesis of a subjective, embodied point of view4–6. However, noninvasively measuring (and even better, manipulating) interoception is inherently difficult. Unlike visual or tactile awareness, where an experimenter can carefully control stimulus strength and detection difficulty, interoceptive signals are spontaneous, uncontrolled processes. As such, prevailing methods for measuring interoception typically involve subjects attending to their heartbeats and reporting how many heartbeats they counted in a given interval. This is known as the heartbeat counting task (or Schandry task, named after its creator)7. Now a new study has cast serious doubt on what this task actually measures.

The study, published by Zamariola et al in Biological Psychology8, begins by detailing what we already largely know: the heartbeat counting task is inherently problematic. For example, the task is easily confounded by prior knowledge or beliefs about one’s average heart rate. Zamariola et al write:

“Since the original task instruction requires participants to estimate the number of heartbeats, individuals may provide an answer based on beliefs without actually attempting to perceive their heartbeats. Consistent with this view, one study (Windmann, Schonecke, Fröhlig, & Maldener, 1999) showed that changing the heart rate in patients with cardiac pacemaker, setting them to low (50 beats per minute, bpm), medium (75 bpm), or high (110 bpm) heart rate, did not influence their reported number of heartbeats. This suggests that these patients performed the task by relying on previous knowledge instead of perception of their bodily states.”

This raises the question of what exactly the task is measuring. The essence of heartbeat counting tasks is that one must silently count the number of perceived heartbeats over multiple temporal intervals. From this, an “interoceptive accuracy score” (IAcc) is computed using the formula:

IAcc = 1/3 ∑ ( 1 – |actual heartbeats – reported heartbeats| / actual heartbeats )
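As a concrete illustration, here is a minimal R sketch of how such a score could be computed for one participant across three counting intervals (the numbers below are made up purely for illustration):

# Hypothetical example: three counting intervals for one participant
actual   <- c(25, 38, 74)   # heartbeats actually recorded in each interval
reported <- c(20, 30, 55)   # heartbeats the participant silently counted

iacc <- mean(1 - abs(actual - reported) / actual)
iacc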

This formula is meant to render over-counting (counting heartbeats that don’t occur) and under-counting (missing actual heartbeats) equivalent, in a score bounded by 0-1. Zamariola et al argue that these scores lack fundamental construct validity on the basis of four core arguments. I summarize each argument below; see the full article for the detailed explanation:

  1. [interoceptive] abilities involved in not missing true heartbeats may differ from abilities involved in not over-interpreting heartbeats-unrelated signals. [this assumption] would be questioned by evidence showing that IAcc scores largely depend on one error type only.
  2. IAcc scores should validly distinguish between respondents. If IAcc scores reflect people’s ability to accurately perceive their inner states, a correlation between actual and reported heartbeats should be observed, and this correlation should linearly increase with higher IAcc scores (i.e., better IAcc scorers should better map actual and reported heartbeats).
  3. a valid measure of interoception accuracy should not be structurally tied to heart condition. This is because heart condition (i.e. actual heartbeats) is not inherent to the definition of the interoceptive accuracy construct. In other words, it is essential for construct validity that people’s accuracy at perceiving their inner life is not structurally bound to their cardiac condition.
  4. The counting interval [i.e., 10, 15, 30 seconds] should not impact IAcc; a wide range of scores are in fact used and these should be independent of the resultant measure.

Zamariola et al then go on to show that, in a sample of 572 healthy individuals (386 female), each of these assumptions is strongly violated: IAcc scores depend largely on under-reporting heartbeats (Fig. 1); the correlation between actual and perceived heartbeats is extremely low, and is higher at average than at high IAcc levels (Fig. 2); IAcc is systematically higher at slower heart rates (Fig. 3); and longer counting intervals lead to substantially worse IAcc (not shown):

Fig. 1: IAcc scores are mainly driven by under-reporting; “less than 5% of participants showed overestimation… Hence, IAcc scores essentially inform us of how (un)willing participants are to report they perceived a heartbeat.”

Fig. 2: Low overall correlation (grey dashed line) between counted and actual heartbeats (r = 0.16, 2.56% shared variance). Further, the correlation varied non-linearly across bins of IAcc scores, which in the authors’ words demonstrates that “IAcc scores fail to validly differentiate individuals in their ability to accurately perceive their inner states within the top 60% IAcc scorers.”

Fig. 3: IAcc scores depend negatively on the number of actual heartbeats, suggesting that individuals with lower overall heart rate will be erroneously characterized as having ‘good interoceptive accuracy’.

Overall, the authors draw the conclusion that the heartbeat counting task is nigh-useless, lacking both face and construct validity. What should we measure instead? The authors suggest that, given very many trials, the mere correlation of counted and actual heartbeats may be a (slightly) better measure. However, given the massive bias towards under-reporting heartbeats, they suggest that the task measures only the willingness to report a heartbeat at all. As such, they highlight the need for true psychophysical tasks which can distinguish participant reporting bias (i.e., criterion) from true sensitivity to heartbeats. A potentially robust alternative may be the multi-interval heartbeat discrimination task9, in which a method of constant stimuli is used to compare heartbeats to multiple intervals of temporal stimuli. However, this task is substantially more difficult to administer; it requires some knowledge of psychophysics and as much as 45 minutes to complete. As many (myself included) are interested in measuring interoception in sensitive patient populations, it’s not a given that this task will be widely adopted.
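For intuition, here is a minimal R sketch of that correlation-based alternative, using made-up per-trial counts for a single participant over many short counting intervals:

# Hypothetical per-trial data for one participant (values invented for illustration)
set.seed(1)
actual  <- round(rnorm(40, mean = 20, sd = 4))      # heartbeats recorded per trial
counted <- round(actual * 0.7 + rnorm(40, sd = 3))  # noisy, under-reported counts

cor(actual, counted)   # trial-wise correspondence, rather than the ratio-based IAcc score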

I’m curious what my readers think. For me, this paper puts a final nail in the coffin of heartbeat counting tasks. Nearly every interoception researcher I’ve spoken to has expressed concerns about what the task actually measures. Worse, large intra-subject variance and the fact that many subjects perform incredibly poorly on the task seem to undermine the idea that it is anything like a measure of cardiac perception. At best, it seems to be a measure of interoceptive attention and report bias. The study by Zamariola and colleagues is well-powered, sensibly conducted, and seems to provide unambiguous evidence against the task’s basic validity. Heartbeat counting; the bell tolls for thee.

References

  1. Foster, J. A. & McVey Neufeld, K.-A. Gut–brain axis: how the microbiome influences anxiety and depression. Trends Neurosci. 36, 305–312 (2013).
  2. Zelano, C. et al. Nasal Respiration Entrains Human Limbic Oscillations and Modulates Cognitive Function. J. Neurosci. 36, 12448–12467 (2016).
  3. Khalsa, S. S. et al. Interoception and Mental Health: a Roadmap. Biol. Psychiatry Cogn. Neurosci. Neuroimaging (2017). doi:10.1016/j.bpsc.2017.12.004
  4. Park, H.-D. & Tallon-Baudry, C. The neural subjective frame: from bodily signals to perceptual consciousness. Philos. Trans. R. Soc. Lond. B Biol. Sci. 369, 20130208 (2014).
  5. Seth, A. K. Interoceptive inference, emotion, and the embodied self. Trends Cogn. Sci. 17, 565–573 (2013).
  6. Barrett, L. F. & Simmons, W. K. Interoceptive predictions in the brain. Nat. Rev. Neurosci. 16, 419–429 (2015).
  7. Schandry, R., Sparrer, B. & Weitkunat, R. From the heart to the brain: A study of heartbeat contingent scalp potentials. Int. J. Neurosci. 30, 261–275 (1986).
  8. Zamariola, G., Maurage, P., Luminet, O. & Corneille, O. Interoceptive Accuracy Scores from the Heartbeat Counting Task are Problematic: Evidence from Simple Bivariate Correlations. Biol. Psychol. (2018). doi:10.1016/j.biopsycho.2018.06.006
  9. Brener, J. & Ring, C. Towards a psychophysics of interoceptive processes: the measurement of heartbeat detection. Philos. Trans. R. Soc. B Biol. Sci. 371, 20160015 (2016).


Bon Voyage – Neuroconscience goes to Cambridge! A Retrospective and Thank You.

Today is a big day – I’m moving to Cambridge! After nearly five years of living in London, it’s finally time for me to move on to green pastures. It’s hard to believe really. I first came to London nearly fifteen years ago on a high school trip. It was love at first sight. I knew that somehow, someday, I would live here. As a Bermudian immigrant living in Florida, to me London was the centre of the world. The bustling big city, but unlike many in the US, one rich in a thousand years of history and culture. And although I didn’t know it then, my eventual career in neuroscience would draw me towards this great city like a moth to the flame.

I still remember it like it was yesterday: my first internship in a neuroimaging lab at the University of Central Florida. Graduate students poring over some strange software called “SPM” to produce colorful brain maps, accompanied by an arcane tome of the same name. Although I knew already then that I wanted to do cognitive neuroimaging, SPM and the eponymously named Functional Imaging Laboratory (FIL) were just names on a book at the time. Later however, when I joined the Interacting Minds Group at Aarhus University, SPM and the FIL were everywhere. Most of our PIs had undertaken their formative training at the centre; we even had our own Friday project presentations modelled exactly on the original. Every single desk had a copy of the SPM textbook. To me, London, the FIL, and the brilliant people working there represented an unchallenged mecca of neuroimaging methods. I set my sights on a postdoc in Queen Square.

Yet, even halfway through my PhD, I wasn’t sure how or even if I’d manage to realize my dream. During my PhD I visited the Institute of Cognitive Neuroscience (ICN) and applied to several labs unsuccessfully. All the excitement and energy of the London neuroscience scene seemed only there to tease me; as if to say an upstart boy from Alabama vis-à-vis Bermuda could only taste what was on offer, but never possess it. Finally, something broke; Geraint Rees, then ICN director, took an interest in my work on embodiment and perception, and invited me to apply for a jaw-dropping four years of postdoc, based between the FIL and ICN. I was elated to apply, and even more so to get it. Breathlessly I told my friends and family back home – this was it, my big break in the big city. And I wasn’t just headed to London, but to that seemingly mythic centre from which so many of my interests – brain imaging, predictive processing, and clever experimental design – seemed to stem. As far as I was concerned, this was my big chance on Broadway, and I told anyone who would listen.

Of course, in the end reality is never quite the same as expectation. I’ll never quite forget my shock on my first day on the job. During my induction, I was giddy as Marcia Bennett led me around the centre, explaining various protocols and introducing me to other fellows. But then she took me down to the basement office; a sterile, open-plan room with more than twenty desks, no real windows, and at least 10 or more active researchers all working in close proximity. As she left me at my desk, I worried that perhaps I’d made a mistake. My offices in Denmark had been luxurious, often empty, and with a stunning view of the city. Now I was to sit in a basement, crammed beside what seemed then like a factory line of postdocs, every day for four years? On top of that, I quickly learned that commuting one hour every day on the tube is far from fun, once the novelty wears off. Nevertheless, I set to my work. And in time, as I adjusted to living in a big city and working in a big centre, I came to find that London, UCL, and Queen Square would capture my mind and my heart in ways I could never have imagined.

Now as I look back, it’s difficult to believe so much has come to pass. I’ve lived in my dream city for the past five years; I’ve gone to bohemian art shows, sipped with brewers at hipster beer festivals, visited ornate science societies, and grown a massive beard. I’ve ended up at crazy nightclubs at early hours; I’ve witnessed major developments in cognitive neuroscience, and I started a neuroscience sport climbing club. Like an extremophile sucking off some nutrient-rich deep-sea volcanic vent, I’ve inhaled every bit of exciting new methods, ideas, and energy that comes through the amazing and overwhelming world that is London neuroscience. I’ve made friends, and I’ve made mistakes. I’ve had major victories, and also major losses. And through it all, I’ve been supported by some of the most brilliant and helpful friends, colleagues, and mentors anyone could ever possibly ask for.

Of course, London is many things to many people, but perhaps it is home to the fewest of all. So many times I asked myself: am I a Londoner? Will this city ever be my home, or is it just one great side-show on the great journey of life? In the end, I still don’t know the answer. I’m a nomad, and I’ve lived in London for as long as I’ve lived anywhere else in the world. It will always be a part of me, and I will always be grateful for the gifts she has given me. I leave London a better person, and a better neuroscientist. And who knows… maybe someday I’ll return (… although hopefully at a higher pay-grade!).

Where do you go after living in your dream city and working at your dream job? To Cambridge! I’m on my way today to a new flat, a new job, a new mentor, and a new centre. If my mother could see me now, I’m sure she’d never believe her baby boy made it so far away from sweet home Alabama. And London won’t be far away; indeed, for some months I’ll be returning each week to carry on collaborations as I transition between jobs. And as Karl says: once you’ve been at the FIL, you never really leave. The FIL is many things, but most of all it is a family of people bonded by a desire to do really kick-ass cognitive neuroscience. So as I go on to Cambridge and whatever lies beyond, I know I will always carry with me my training, my friends, and my amazing colleagues.

And the future is bright indeed! This post is already too long; but let it suffice as a teaser when I say that my upcoming projects will be some of the most exciting work I have ever done. I’m leaving the FIL armed with a shiny new model of my own making and a suite of experimental questions that I am eager to answer. Together with Professor Paul Fletcher and the amazing colleagues of the new MRC/Wellcome Translational Research Facility and Cambridge Psychiatry Department, I will have access to unique patient groups not found anywhere else in the world, the latest methods in pharmacological neuroimaging, and a team of amazing collaborators accelerating our research. By applying the methods and models I’ve developed in London in this setting, I will have the chance to push our understanding of interoception, metacognition, and embodied self-inference to newfound heights. If London was my Broadway, then Cambridge shall be the launchpad for my world tour.

So stay tuned! If I learned one thing in the past half-decade, it’s that I need to return to blogging. Now that I’ve finally recaptured my voice, and am headed forward on another amazing research adventure, it’s more important than ever to communicate to you, my beloved hive mind, what exciting new developments are on the horizon. In the near future I will outline some of the exciting new directions my research will be taking at Cambridge, where we will use a variety of methods to probe brain-body interaction and its disruption in psychiatric and health-harming behaviours.

And now for some much needed thanks. Thanks to my beautiful and brilliant wife Francesca – whose incredible research inspires me daily – for her unwavering and unconditional love and support in all things. To my Grandmother who saved me so many times and is the inspiration for so much of my work. Thanks Mom – I wish you could see this. Thanks Dad for teaching me to work hard for my dreams and to always lead the way. Thanks to Corey and Marissa who are the best brother and sister anyone could ask for. To Jonathan and Alex, the two best men on the planet – without our Skype sessions I’d have never survived this long! To my amazing mentors – of whom I have been fortunate to have so many who have helped me so dearly: Shaun Gallagher, Andreas Roepstorff, Antoine Lutz, Uta & Chris Frith, Geraint Rees, Jonathan Smallwood, and Karl Friston. To my students Maria, Darya, Calum, and Thomas who did such an amazing job making our science a reality. To my awesome, amazing friends, colleagues, & collaborators who challenge me, help me, and help make every piece of science that comes across our desks as brilliant, rigorous, and innovative as possible – thanks especially to Tobias Hauser, Peter Zeidman, Sam Schwarzkopf, and Francesco Rigoli for letting me distract you with endless questions – you guys make my research rock in so many ways. Thanks to John Greenwood, Joel Winston, Fred Dick, Martina Callaghan, John Ashburner, Gareth Barnes, Steve Fleming, Ray Dolan, Tristan Bekinschtein, Becky Lawson, Rimona Weil, and all of my amazing collaborators – here is to many future projects together! To the entire basement and first floor clan who put up with my outbursts and antics, you guys are the best and you made this time amazing. Thanks to Brianna, Sofia, Antonio, Eva, Elizabeth, Dan, Frederike, Phillip, Alex, Wen Wen, and everyone from the ICN who made Queen Square the coolest place in town, and who showed me how fun castles can be. To the amazing administration and scientific staff of the FIL, who truly make it the best place to work in neuroscience bar none. And finally – thanks to YOU, my readers and digital colleagues, for your support, your energy, and your enthusiasm, which make @Neuroconscience possible and help me push myself into ever bolder frontiers.

To Cambridge, and beyond!

Micah Galen Allen/@neuroconscience

Introducing Raincloud Plots!

Violin Plots
https://xkcd.com/1967/

UPDATE: NOW AVAILABLE AS A PREPRINT – INCLUDES CODE TUTORIALS FOR MATLAB, PYTHON, AND R! 

https://peerj.com/preprints/27137v1/

Like many of you, I love ggplot2. The ability to make beautiful, informative plots quickly is a major boon to my research workflow. One plot I’ve been particularly fond of recently is the ‘violin + boxplot + jittered dataset’ combo, which nicely provides an overview of the raw data, the probability distribution, and ‘statistical inference at a glance’ via medians and confidence intervals. However, as pointed out in this tweet by Naomi Caselli, violin plots mirror the data density in a totally uninteresting/uninformative way, simply repeating the exact same information for the sake of visual aesthetics. Not to mention, for some at least they are actually a bit lewd.

In my quest for a better plot, which can show differences between groups or conditions while providing maximal statistical information, I recently came upon the ‘split violin’ plot. This nicely deletes the mirrored density, freeing up room for additional plots such as boxplots or raw data. Inspired by a particularly beautiful combination of split-half violins and dot plots, I set out to make something a little less confusing. In particular, dot plots are rather complex and don’t precisely mirror what is shown in the split violin, possibly leading to more confusion than clarity. Introducing ‘elephant’ and ‘raincloud’ plots, which combine the best of all worlds! Read on for the full code recipe plus some hacks to make them look pretty (apologies for poor markup formatting, haven’t yet managed to get markdown to play nicely with wordpress)!

RainCloudPlotDemo
A ‘raincloud’ plot, which combines boxplots, raw jittered data, and a split-half violin.

Let’s get to the data – with extra thanks to @MarcusMunafo, who shared this dataset on the University of Bristol’s open science repository.

First, we’ll set up the needed libraries and import the data:

library(readr)
library(tidyr)
library(ggplot2)
library(Hmisc)
library(plyr)
library(RColorBrewer)
library(reshape2)

source("https://gist.githubusercontent.com/benmarwick/2a1bb0133ff568cbe28d/raw/fb53bd97121f7f9ce947837ef1a4c65a73bffb3f/geom_flat_violin.R")

my_data <- read.csv(url("https://data.bris.ac.uk/datasets/112g2vkxomjoo1l26vjmvnlexj/2016.08.14_AnxietyPaper_Data%20Sheet.csv"))

head(my_data)

Although it doesn’t really matter for this demo, in this experiment healthy adults recruited from MTurk (n = 2006) completed a six-alternative forced-choice task in which they were presented with basic emotional expressions (anger, disgust, fear, happiness, sadness, and surprise) and had to identify the emotion presented in each face. Outcome measures were recognition accuracy and unbiased hit rate (i.e., sensitivity).

For this demo, we’ll focus on the unbiased hit rate for the anger, disgust, fear, and happiness conditions. Let’s reshape the data from wide to long format to facilitate plotting:

library(reshape2)
my_datal <- melt(my_data, id.vars = c("Participant"), measure.vars = c("AngerUH", "DisgustUH", "FearUH", "HappyUH"), variable.name = "EmotionCondition", value.name = "Sensitivity")

head(my_datal)

Now we’re ready to start plotting. But first, let’s define a theme to make pretty plots.

raincloud_theme = theme(
text = element_text(size = 10),
axis.title.x = element_text(size = 16),
axis.title.y = element_text(size = 16),
axis.text = element_text(size = 14),
axis.text.x = element_text(angle = 45, vjust = 0.5),
legend.title=element_text(size=16),
legend.text=element_text(size=16),
legend.position = "right",
plot.title = element_text(lineheight=.8, face="bold", size = 16),
panel.border = element_blank(),
panel.grid.minor = element_blank(),
panel.grid.major = element_blank(),
axis.line.x = element_line(colour = 'black', size=0.5, linetype='solid'),
axis.line.y = element_line(colour = 'black', size=0.5, linetype='solid'))

Now we need to calculate some summary statistics:

lb <- function(x) mean(x) - sd(x)
ub <- function(x) mean(x) + sd(x)

sumld<- ddply(my_datal, ~EmotionCondition, summarise, mean = mean(Sensitivity), median = median(Sensitivity), lower = lb(Sensitivity), upper = ub(Sensitivity))

head(sumld)

Now we’re ready to plot! We’ll start with a ‘raincloud’ plot (thanks to Jon Roiser for the great suggestion!):

g <- ggplot(data = my_datal, aes(y = Sensitivity, x = EmotionCondition, fill = EmotionCondition)) +
geom_flat_violin(position = position_nudge(x = .2, y = 0), alpha = .8) +
geom_point(aes(y = Sensitivity, color = EmotionCondition), position = position_jitter(width = .15), size = .5, alpha = 0.8) +
geom_boxplot(width = .1, outlier.shape = NA, alpha = 0.5) +
expand_limits(x = 5.25) +
guides(fill = FALSE) +
guides(color = FALSE) +
scale_color_brewer(palette = "Spectral") +
scale_fill_brewer(palette = "Spectral") +
coord_flip() +
theme_bw() +
raincloud_theme

g

I love this plot. Adding a bit of alpha transparency makes it so we can overlay boxplots over the raw, jittered data points. In one plot we get basically everything we need: eyeballed statistical inference, assessment of data distributions (useful to check assumptions), and the raw data itself showing outliers and underlying patterns. We can also flip the plots for an ‘Elephant’ or ‘Little Prince’ plot – so named for the resemblance to an elephant being eaten by a boa constrictor:

g <- ggplot(data = my_datal, aes(y = Sensitivity, x = EmotionCondition, fill = EmotionCondition)) +
geom_flat_violin(position = position_nudge(x = .2, y = 0), alpha = .8) +
geom_point(aes(y = Sensitivity, color = EmotionCondition), position = position_jitter(width = .15), size = .5, alpha = 0.8) +
geom_boxplot(width = .1, outlier.shape = NA, alpha = 0.5) +
expand_limits(x = 5.25) +
guides(fill = FALSE) +
guides(color = FALSE) +
scale_color_brewer(palette = "Spectral") +
scale_fill_brewer(palette = "Spectral") +
# coord_flip() +
theme_bw() +
raincloud_theme

g

ElephantPlotDemo.jpeg

For those who prefer a more classical approach, we can replace the boxplot with a mean and confidence interval using the summary statistics we calculated above. Here we’re using +/- 1 standard deviation, but you could also plot the SEM or 95% CI:

g <- ggplot(data = my_datal, aes(y = Sensitivity, x = EmotionCondition, fill = EmotionCondition)) +
geom_flat_violin(position = position_nudge(x = .2, y = 0), alpha = .8) +
geom_point(aes(y = Sensitivity, color = EmotionCondition), position = position_jitter(width = .15), size = .5, alpha = 0.8) +
geom_point(data = sumld, aes(x = EmotionCondition, y = mean), position = position_nudge(x = 0.3), size = 2.5) +
geom_errorbar(data = sumld, aes(ymin = lower, ymax = upper, y = mean), position = position_nudge(x = 0.3), width = 0) +
expand_limits(x = 5.25) +
guides(fill = FALSE) +
guides(color = FALSE) +
scale_color_brewer(palette = "Spectral") +
scale_fill_brewer(palette = "Spectral") +
theme_bw() +
raincloud_theme

g

Et voilà! I find that the combination of raw data + box plots + split violin is very powerful and intuitive, and really leaves nothing to the imagination when it comes to the underlying data. Although here I used a very large dataset, I believe these would still work well for more typical sample sizes in cognitive neuroscience (i.e., N ~ 30-100), although you may want to include only the boxplot for very small samples.

I hope you find these useful, and you can be absolutely sure they will be appearing in some publications from our workbench as soon as possible! Go forth and make it rain!

Thanks very much to @naomicaselli, @jonclayden, @patilindrajeets, & @jonroiser for their help and inspiration with these plots!

2017 – my year so far

Nothing like a long-awaited vacation. My wife and I take our yearly two-week vacation in September. Although it is a bit late, we enjoy the timing as most summer spots are still warm but have few tourists and low prices. This year we are in Francesca’s hometown of Marostica, Italy, for not one but TWO weddings. It will be a true test of my self-discipline to not gain several kilos on this trip!

That being said, phew am I ready for some vacation. The later date means that as the summer stretches on, my work ethic flags a bit and I become prone to taking more and more staycation days. I really believe this mid-summer lethargy is some kind of holdover from grade-school conditioning; the body just cannot forget the joy that is the first day of summer vacation. And I do believe that creative, passionate work requires long periods of rest and enjoyment if it is to avoid becoming repetitive. And so, I’ve had a fun summer of travelling, building collaborations, and putting pieces in place for new projects and applications.

Overall, this has been a busy, if somewhat odd, year. Most of the first half was taken up by my first ever fellowship-level grant application. In 2016, I had just found my publishing stride, seemingly clearing one item after another off my desk. Publishing finally felt ‘normal’, like a part of the job with manageable known-knowns and known-unknowns. I even enjoyed it. And of course, there was a much needed, although transient, period of enjoying the feeling of accomplishment. It is nice to have a stable of new papers across a spread of journals and think, ‘well, that is dinner for the next few years secured’. It was in the midst of this semi-tranquility that I realized the end of my long and luxurious post-doc was looming near. Suddenly I would have to do something entirely new; the complexities and uncertainties of applying for start-up funds for my first lab made me again feel like an anxious graduate student.

Of course, my first attempt took ages, was rife with errors, and was overly complex in almost every way. Grant writing, like everything, is really a learning process; I wish I had applied for more small grants during my postdoc, just to better prepare myself for the process itself. Writing papers does not really train you for this task, which is much more ‘sales’-oriented. And it doesn’t feel very productive. Here you are, pouring an amount of work which could easily produce 2-3 new papers in itself into an endeavor which is overwhelmingly likely to be a failure. It feels a bit like taking all of your hard-won momentum and dumping it into a bin labelled ‘unlikely hopes and dreams’.

But in the end, it really was worth it, at least for me. Even if the outcome itself is uncertain, just the process of collecting all of your achievements under one banner will strengthen and clarify your view of yourself as a professional. More importantly, having to think in a highly unconstrained, big-picture way will force you to find the strongest themes within your research. Nevertheless, the process of comparing yourself to the best of your peers is quite emotionally draining, and doubly so when you are worried about losing steam on your research.

And then you submit it. 2-3 months of writing, years of planning and dreaming, all boiled down to one button-click submission. And then you try to go back to your research agenda for the year; it’s important not to let the hot irons cool too much while you are out seeking funding. This is where I am now: awaiting a massive decision, while still trying to forge on with my ongoing research. And we’ve got some terribly exciting new projects and collaborations in the works this year, ranging from my first ever registered report, to big-data-fuelled investigations of brain-body connectomics, to a new computational neuroimaging task which we’ll be scanning in both MEG and fMRI. I’m really looking forward to the new challenges, techniques, and discoveries this year will bring.

And that is the thought that keeps me going! I know that, even if I am unsuccessful in my quest for funding, I will always find a way to pursue these questions. Because it is what I do, and what I live for.

Next post: the year to come! I’ll give some teasers about the different projects we are currently working on. Also, in the next weeks I’ll be overhauling this website to give more information about my research, and also plan to start blogging my backlog of recent publications.

The future of neuroconscience and me

The past two years have been a crazy ride. We’ve seen a seeming upheaval of the political world that has shaken most of us to our core. Along the way, our social networks have also changed. Sometimes it feels as if we are all reeling along. When I started this blog, I had the perhaps naive view that I was initiating a nexus of some kind of hivemind. Indeed, my research has benefited massively from this blog, and from all of your amazing input and interaction. For a while, all was good. My following grew, I blogged frequently, and it was all quite rewarding. But when the network started to go mad with current events, perhaps I went a bit mad too. Was neuroconscience my research, a livestream of it, or a hub for my particular flavor of research interests? I must admit that somewhere along the way I lost the plot. The result, I think, has been an increasingly less coherent stream, with my blogposts sputtering down to nothing. I need to find my voice again, and to do that, I think I need to begin splitting out the various streams of my public outreach, research, and personal-is-political daily life. I’m not exactly sure how best to do that, but that didn’t stop me from starting neuroconscience in the first place.

The first steps have been underway for some time. I’ve already redesigned this website into a neuroconscience ‘blog’ and a more standard ‘here I am’ academic webfront. My blog will continue to be a collection of rants, thoughts, and musings on all things cogneuro and beyond. My twitter stream @neuroconscience shall become a more content-focused, curated stream of my favorite links, media, and research. Maintaining a high signal-to-noise ratio will be a paramount goal. As usual, this will follow the whims of my ADHD-driven interest; my personal thoughts, political rants, and similar will now be posted exclusively from my new ‘personal’ account @micahgallen. Here you can get my unfiltered voice, all the random musings, and especially loads of navel gazing. In case you are curious, the ‘g’ stands for ‘Galen’, my middle name. I know some of you thought it better to keep it all in one place, but I believe many who once came to my feed for useful and interesting content are being turned away by the volume and lack of organization in my tweets. My personal account will collect the chaos; neuroconscience will return to the ‘good ‘ol days’ when it was a highly curated stream. And who knows, someday maybe we’ll be joined by a lab account…

Enjoy!

Micah/@neuroconscience

Some post #MarchforScience thoughts.

See the bottom of this post for a collection of great #MarchForScience tweets, images, and my livestream of the London march!

Foremost, thanks to everyone who came out and stood up to show their support. I think it is hard not to look at the worldwide crowds and feel an upwelling of pride and hope. If nothing else, the feeling of solidarity, and of sending a loud message that we will not accept a post-evidence society, is well worth the efforts of the organizers and marchers. I just wanted to try and write down a few thoughts I had about the marches, which I’m sure are shared by many others.

Yesterday I think many of us saw, first hand and for the first time, that Science has real people power. Like any other special interest group, we can band together and organize to amplify the reach and influence of our message. Ultimately science requires the creation of a space that is free from politics, and the creation of that space is itself a political act. It is my hope that yesterday planted the seeds of organization that can grow into a movement. We can’t expect the general public to stand up for us; it is indeed time to work to ensure a society where science and evidence-based policy flourish.

That being said, I’m sure many of you are also wondering what, if anything, yesterday will really achieve. I also have the worry that these marches may ultimately act as another form of ‘slacktivism’, exorcising our anxieties while ultimately achieving little. I can’t speak for the worldwide marches, but I did feel that more could have been done to try and carry the momentum forward. It is a bold first step for scientists to put aside their self-assumed neutrality and stand up for their own cause. At the London march yesterday you could feel an almost palpable unease or cautiousness. It was perhaps the most quiet, calm, and reserved political march I’ve ever participated in – and of course, also a lot of fun. Ultimately, if we are going to effect change, this can only be the first step. We need to begin to organize into effective political action communities that can lobby on our behalf.

This also means addressing some of the infighting that arose during the organization of the march. Science cannot turn a blind eye to diversity, or our own issues therein. Effective political action requires building a broad-based progressive movement that is inclusive and champions a set of values that does not exclude people of color, LGBT people, or other minorities. I recognize that there are already growing pains; many scientists feel science should inherently be apolitical. But what we’ve seen is that our work will be politicized no matter what stance we take on it. My hope is that the marches yesterday will embolden us to reach out to community organizers, to build a strong and evidence-based movement for political reform. Let yesterday be the planting of a seed, from which a thousand flowers may bloom.


Here are some fun tweets and links from the march (the embedded tweets included a great video of DC scientists having a blast, an amazing turnout in Seattle, on-point sine-wave signs, aerial shots of the DC march, 20k marchers in Philly, and a ‘Hello, my name is Science!’ sign).

Updates to site – CV & publications – and future plans

It’s grant writing season! I’m currently working on some fellowship applications, as it’s time to spread my little wings and attempt to fly. This means I’ve finally given some much needed love to this site. Out goes the old bio page (which was incredibly outdated); in come an up-to-date list of publications by year and an updated CV (with supervised students!). I need to do some deeper revisions – since I don’t blog anymore I’d like to convert this to a more traditional academic website with the blog as a secondary tab. But for today this at least does the bare minimum. You can stay up to date with our latest publications here:

https://neuroconscience.com/micah-allen/publications/

and my cv is here:

https://neuroconscience.com/micah-allen/

Although this year will be primarily about funding applications (some incredible projects we are planning… cannot wait to share!), we will also have some work underway, including an exciting extension of our eLife paper on arousal and confidence, and some new research exploring inter-relations between effective connectivity, mind-wandering, interoception, and sleep (not all together 😛 ). This summer I will also attempt to finally preprint our long in-prep paper on the noise-induced confidence bias, so stay tuned for that!

Of course, I hope to return to blogging eventually, but at least it was a gangbusters year for publications. If you are craving some neuroconscience writing, go check out one of the ten papers we published!

Thanks as always to everyone who motivates our research.

Yours always,

Micah Allen aka neuroconscience