Nothing like a long-awaited vacation. My wife and I take our yearly two-week vacation in September. Although it is a bit late, we enjoy the timing as most summer spots are still warm but have few tourists and low prices. This year we are in Francesca’s hometown of Marostica, Italy, for not one but TWO weddings. It will be a true test of my self-discipline to not gain several kilos on this trip!
That being said, phew am I ready for some vacation. The later date means that as the summer stretches on, my work ethic flags a bit and I become prone to taking more and more staycation days. I really believe this mid-summer lethargy is some kind of holdover from grade-school conditioning; the body just cannot forget the joy that is the first day of summer vacation. And I do believe that to avoid becoming repetitive, creative, passionate work requires long periods of rest and enjoyment. And so, I’ve had a fun summer of travelling, building collaborations, and putting pieces in place for new projects and applications.
Overall, this has been a busy, if somewhat odd year. Most of the first half was taken up by my first ever fellowship-level grant application. In 2016, I had just found my publishing stride, seemingly clearing one item after another off my desk. Publishing finally felt ‘normal’, like a part of the job with manageable known-knowns and known-unknowns. I even enjoyed it. And of course, there was a much needed, although transient, period of enjoying the feeling of accomplishment. It is nice to have a stable of new papers across a spread of journals and think, ‘well, that is dinner for the next few years secured’. It was in the midst of this semi-tranquility that I realized the end of my long and luxurious post-doc was looming near. Suddenly I would have to do something entirely new; the complexities and uncertainties of applying for start-up funds for my first lab made me again feel like an anxious graduate student.
Of course, my first attempt took ages, was rife with errors, and was overly complex in almost every way. Grant writing, like everything, is really a learning process; I wish I had applied for more small grants in my postdoc, just to better prepare me for the process itself. Writing papers does not really train you for this task, which is much more ‘sales’-oriented. And it doesn’t feel very productive. Here you are pouring an amount of work that could easily produce 2-3 new papers into an endeavor which is overwhelmingly likely to be a failure. It feels a bit like taking all of your hard-won momentum and dumping it into a bin labelled ‘unlikely hopes and dreams’.
But in the end, it really was worth it, at least for me. Even if the outcome itself is uncertain, just the process of collecting all of your achievements under one banner will strengthen and clarify your view of yourself as a professional. More importantly, having to think in a highly unconstrained, big-picture way will force you to find the strongest themes within your research. Nevertheless, the process of comparing yourself to the best of your peers is quite emotionally draining, and doubly so when you are worried about losing steam on your research.
And then you submit it. 2-3 months of writing, years of planning and dreaming, all boiled down to one button-click submission. And then you try to go back to your research agenda for the year; it’s important not to let the hot irons cool too much while you are out seeking funding. This is where I am now; awaiting a massive decision, while still trying to forge on with my ongoing research. And we’ve got some terribly exciting new projects and collaborations in the works this year, ranging from my first ever registered report, to big-data fueled investigations of brain-body connectomics and a new computational neuroimaging task, which we’ll be scanning in both MEG and fMRI. I’m really looking forward to the new challenges, techniques, and discoveries this year will bring.
And that is the thought that keeps me going! I know that, even if I am unsuccessful in my quest for funding, I will always find a way to pursue these questions. Because it is what I do, and what I live for.
Next post: the year to come! I’ll give some teasers about the different projects we are currently working on. Also, in the next weeks I’ll be overhauling this website to give more information about my research, and also plan to start blogging my backlog of recent publications.
As you read the words on this page, you might also notice a growing feeling of confidence that you understand their meaning. Every day we make decisions based on ambiguous information and in response to factors over which we have little or no control. Yet rather than being constantly paralysed by doubt, we generally feel reasonably confident about our choices. So where does this feeling of confidence come from?
Computational models of human decision-making assume that our confidence depends on the quality of the information available to us: the less ambiguous this information, the more confident we should feel. According to this idea, the information on which we base our decisions is also the information that determines how confident we are that those decisions are correct. However, recent experiments suggest that this is not the whole story. Instead, our internal states — specifically how our heart is beating and how alert we are — may influence our confidence in our decisions without affecting the decisions themselves.
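This evidence-quality account is easy to illustrate with a toy simulation (my own sketch, not a model from the paper): evidence on each trial is a noisy sample centred on the true motion strength, the choice is the sign of the sample, and confidence tracks its magnitude. Degrading the evidence then lowers accuracy and confidence together:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(coherence, noise_sd=1.0, n_trials=100_000):
    """Toy sketch of an evidence-quality account of confidence.

    Each trial draws a noisy evidence sample whose mean is the (rightward)
    motion coherence; the choice is the sign of the sample and confidence
    is its magnitude. All values here are illustrative, not from the study.
    """
    evidence = rng.normal(loc=coherence, scale=noise_sd, size=n_trials)
    accuracy = np.mean(evidence > 0)        # proportion of correct (rightward) choices
    confidence = np.mean(np.abs(evidence))  # confidence tracks evidence strength
    return accuracy, confidence

# Lower coherence (harder trials) lowers accuracy and confidence together
for coh in (1.0, 0.5, 0.1):
    acc, conf = simulate(coh)
    print(f"coherence {coh:.1f}: accuracy {acc:.2f}, mean confidence {conf:.2f}")
```

On this account there is no room for confidence to move independently of the evidence, which is precisely what the arousal experiments call into question.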
To test this possibility, Micah Allen and co-workers asked volunteers to decide whether dots on a screen were moving to the left or to the right, and to indicate how confident they were in their choice. As the task became objectively more difficult, the volunteers became less confident about their decisions. However, increasing the volunteers’ alertness or “arousal” levels immediately before a trial countered this effect, showing that task difficulty is not the only factor that determines confidence. Measures of arousal — specifically heart rate and pupil dilation — were also related to how confident the volunteers felt on each trial. These results suggest that unconscious processes might exert a subtle influence on our conscious, reflective decisions, independently of the accuracy of the decisions themselves.
The next step will be to develop more refined mathematical models of perception and decision-making to quantify the exact impact of arousal and other bodily sensations on confidence. The results may also be relevant to understanding clinical disorders, such as anxiety and depression, where changes in arousal might lock sufferers into an unrealistically certain or uncertain world.
The PNAS journal club also published a useful summary, including some great quotes from Phil Corlett and Rebecca Todd:
… Allen’s findings are “relevant to anyone whose job is to make difficult perceptual judgments trying to see signal in a lot of noise,” such as radiologists or baggage inspectors, says cognitive neuroscientist Rebecca Todd at the University of British Columbia in Vancouver, who did not take part in the research. Todd suggests that people who apply decision-making models to real world problems need to better account for the influence of internal or emotional states on confidence.
The fact that bodily states can influence confidence may even shed light on mental disorders, which often involve blunted or heightened signals from the body. Symptoms could result from how changes in sensory input affect perceptual decision-making, says cognitive neuroscientist and schizophrenia researcher Phil Corlett at Yale University, who did not participate in this study.
Corlett notes that some of the same ion channels involved in regulating heart rate are implicated in schizophrenia as well. “Maybe boosting heart rate might lead people with schizophrenia to see or hear things that aren’t present,” he speculates, adding that future work could analyze how people with mental disorders perform on these tasks…
I also wrote a blog post summarizing the article for The Conversation:
How do we become aware of our own thoughts and feelings? And what enables us to know when we’ve made a good or bad decision? Every day we are confronted with ambiguous situations. If we want to learn from our mistakes, it is important that we sometimes reflect on our decisions. Did I make the right choice when I leveraged my house mortgage against the market? Was that stop light green or red? Did I really hear a footstep in the attic, or was it just the wind?
When events are more uncertain, for example if our windscreen fogs up while driving, we are typically less confident in what we’ve seen or decided. This ability to consciously examine our own experiences, sometimes called introspection, is thought to depend on the brain appraising how reliable or “noisy” the information driving those experiences is. Some scientists and philosophers believe that this capacity for introspection is a necessary feature of consciousness itself, forging the crucial link between sensation and awareness.
One important theory is that the brain acts as a kind of statistician, weighting options by their reliability, to produce a feeling of confidence more or less in line with what we’ve actually seen, felt or done. And although this theory does a reasonably good job of explaining our confidence in a variety of settings, it neglects an important fact about our brains – they are situated within our bodies. Even now, as you read the words on this page, you might have some passing awareness of how your socks sit on your feet, how fast your heart is beating or if the room is the right temperature.
Even if you were not fully aware of these things, the body is always shaping how we experience ourselves and the world around us. That is to say experience is always from somewhere, embodied within a particular perspective. Indeed, recent research suggests that our conscious awareness of the world is very much dependent on exactly these kinds of internal bodily states. But what about confidence? Is it possible that when I reflect on what I’ve just seen or felt, my body is acting behind the scenes? …
The New Scientist took an interesting angle not explored in the other write-ups, and also included a good response from Ariel Zylberberg:
“We were tricking the brain and changing the body in a way that had nothing to do with the task,” Allen says. In doing so, they showed that a person’s sense of confidence relies on internal as well as external signals – and the balance can be shifted by increasing your alertness.
Allen thinks the reaction to disgust suppressed the “noise” created by the more varied movement of the dots during the more difficult versions of the task. “They’re taking their own confidence as a cue and ignoring the stimulus in the world.”
“It’s surprising that they show that confidence can be motivated by processes inside a person, instead of what we tend to believe, which is that confidence should be motivated by external things that affect a decision,” says Ariel Zylberberg at Columbia University in New York. “Disgust leads to aversion. If you try a food and it’s disgusting, you walk away from it,” says Zylberberg. “Here, if you induce disgust, high confidence becomes lower and low confidence becomes higher. It could be that disgust is generating this repulsion.”
It is not clear whether it is the feeling of disgust that changes a person’s confidence in this way, or whether inducing alertness with a different emotion, such as anger or fear, would have the same effect.
You can find all the coverage for our article using these excellent services, Altmetric & ImpactStory.
Every now and then, I’m browsing RSS on the tube commute and come across a study that makes me laugh out loud. This of course results in me receiving lots of ‘tuts’ from my co-commuters. Anyhow, the latest such entry to the world of cognitive neuroscience is a study examining brain responses to drum beats in shamanic practitioners. Michael Hove and colleagues of the Max Planck Institute in Leipzig set out to study “Perceptual Decoupling During an Absorptive State of Consciousness” using functional magnetic resonance imaging (fMRI). What exactly does that mean? Apparently: looking at how brain connectivity in ‘experienced shamanic practitioners’ changes when they listen to rhythmic drumming. Hove and colleagues explain that across a variety of cultures, ‘quasi-isochronous drumming’ is used to induce ‘trance states’. If you’ve ever been dancing around a drum circle in the full moon light, or tranced out to Shpongle in your living room, I guess you get the feeling, right?
Anyway, Hove et al. recruited 15 participants who were trained in “core shamanism,” described as:
“a system of techniques developed and codified by Michael Harner (1990) based on cross-cultural commonalities among shamanic traditions. Participants were recruited through the German-language newsletter of the Foundation of Shamanic Studies and by word of mouth.”
They then played these participants rhythmic isochronous drumming (trance condition) versus drumming with more irregular timing (control condition). In what might be the greatest use of a Likert scale of all time, participants rated whether they “would describe [their] experience as a deep shamanic journey” (1 = not at all; 7 = very much so), and indeed rated the trance condition as, well, more trancey. Hove and colleagues then used a fairly standard connectivity analysis, examining eigenvector centrality differences between the two drumming conditions, as well as seed-based functional connectivity:
Hove et al. report that, compared to the non-trance condition, the posterior/dorsal cingulate, insula, and auditory brainstem regions become more ‘hub-like’, as indicated by a higher overall degree centrality of these regions. Further, these regions showed stronger functional connectivity with the posterior cingulate cortex. I’ll let Hove and colleagues explain what to make of this:
“In sum, shamanic trance involved cooperation of brain networks associated with internal thought and cognitive control, as well as a dampening of sensory processing. This network configuration could enable an extended internal train of thought wherein integration and moments of insight can occur. Previous neuroscience work on trance is scant, but these results indicate that successful induction of a shamanic trance involves a reconfiguration of connectivity between brain regions that is consistent across individuals and thus cannot be dismissed as an empty ritual.”
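As an aside, the eigenvector centrality measure behind these ‘hub-like’ claims is straightforward to sketch: a node is central to the extent that it connects strongly to other central nodes, which amounts to computing the leading eigenvector of the connectivity matrix. Here is a toy version via power iteration (made-up matrix and my own code, not the study’s pipeline):

```python
import numpy as np

def eigenvector_centrality(conn, n_iter=1000, tol=1e-10):
    """Eigenvector centrality of a non-negative connectivity matrix.

    Power iteration converges to the leading eigenvector, in which a node's
    score is proportional to the scores of the nodes it connects to.
    """
    n = conn.shape[0]
    v = np.ones(n) / n
    for _ in range(n_iter):
        v_new = conn @ v
        v_new /= np.linalg.norm(v_new)
        if np.allclose(v, v_new, atol=tol):
            return v_new
        v = v_new
    return v

# Toy 4-node network: node 0 connects strongly to everyone,
# so it should come out as the most 'hub-like' node
conn = np.array([[0.0, 1.0, 1.0, 1.0],
                 [1.0, 0.0, 0.2, 0.1],
                 [1.0, 0.2, 0.0, 0.1],
                 [1.0, 0.1, 0.1, 0.0]])
centrality = eigenvector_centrality(conn)
print(centrality.argmax())  # node 0 scores highest
```

In the actual study this kind of score is computed per voxel or region and contrasted between drumming conditions.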
Ultimately the authors’ conclusion seems to be that these brain connectivity differences show that, if nothing else, something must be ‘really going on’ in shamanic states. To be honest, I’m not really sure anyone disagreed with that to begin with. I can’t critique this study without thinking of early (and ongoing) meditation research, where esoteric monks are placed in scanners to show that ‘something really is going on’ in meditation. This argument seems to me to rely on a folk-psychological misunderstanding of how the brain works. Even in placebo conditioning, a typical example of a ‘mental effect’, we know of course that changes in the brain are responsible. Every experience (regardless of how complex) has some neural correlate. The trick is to relate these neural factors to behavioural ones in a way that actually advances our understanding of the mechanisms and experiences that generate them. The difficulty with these kinds of studies is that all we can do is perform reverse inference to try and interpret what is going on; the authors’ conclusion about changes in sensory processing is a clear example of this. What do changes in brain activity actually tell us about trance (and other esoteric) states? Without being coupled to some meaningful understanding of the states themselves, they certainly don’t reveal any particular mechanism or phenomenological quality. As a clear example, we’re surely pushing reductionism to its limit by asking participants to rate a self-described transcendent state on a unidimensional Likert scale. The authors do cite Francisco Varela (a pioneer of neurophenomenological methods), but don’t seem to further consider these limitations or possible future directions.
Overall, I don’t want to seem overly critical of this amusing study. Certainly shamanic traditions are a deeply important part of human cultural history, and understanding how they impact us emotionally, cognitively, and neurologically is a valuable goal. For what amounts to a small pilot study, the protocols seem fairly standard from a neuroscience standpoint. I’m less certain about who these ‘shamans’ actually are, in terms of what their practice constitutes, or how to think about the supposed ‘trance states’, but I suppose ‘something interesting’ was definitely going on. The trick is knowing exactly what that ‘something’ is.
Future studies might thus benefit from more direct characterization of esoteric states and the cultural practices that generate them, perhaps through collaboration with an anthropologist and/or the application of phenomenological and psychophysical methods. For now however, I’ll just have to head to my local drum circle and vibe out the answers to these questions.
The structure, function, and connectivity of the brain change considerably as we age [1–4]. Recent advances in MRI physics and neuroimaging have led to the development of new techniques that allow researchers to map quantitative parameters sensitive to key histological brain factors such as iron and myelination [5–7]. These quantitative techniques reveal the microstructure of the brain by leveraging our knowledge of how different tissue types respond to specialized MRI sequences, in a fashion similar to diffusion-tensor imaging, combined with biophysical modelling. Here at the Wellcome Trust Centre for Neuroimaging, our physicists and methods specialists have teamed up to push these methods to their limit, delivering sub-millimetre, whole-brain acquisition techniques that can be completed in less than 30 minutes. By combining advanced biophysical modelling with specialized image co-registration, segmentation, and normalization routines in a process known as ‘voxel-based quantification’ (VBQ), these methods allow us to image key markers of histological brain factors. Here is a quick description of the method from a primer on our centre’s website:
Anatomical MR imaging has not only become a cornerstone in clinical diagnosis but also in neuroscience research. The great majority of anatomical studies rely on T1-weighted images for morphometric analysis of local gray matter volume using voxel-based morphometry (VBM). VBM provides insight into macroscopic volume changes that may highlight differences between groups; be associated with pathology or be indicative of plasticity. A complimentary approach that has sensitivity to tissue microstructure is high resolution quantitative imaging. Whereas in T1-weighted images the signal intensity is in arbitrary units and cannot be compared across sites or even scanning sessions, quantitative imaging can provide neuroimaging biomarkers for myelination, water and iron levels that are absolute measures comparable across imaging sites and time points.
These biomarkers are particularly important for understanding aging, development, and neurodegeneration throughout the lifespan. Iron in particular is critical for the healthy development and maintenance of neurons, where it is used to drive ATP production in the glial support cells that create and maintain the myelin sheaths critical for neural function. Nutritional iron deficiency during foetal, childhood, or even adolescent development is linked to impaired memory and learning, and to altered hippocampal function and structure [8,9]. Although iron homeostasis in the brain is hugely complex and poorly understood, we know that runaway iron in the brain is a key factor in degenerative diseases like Alzheimer’s and Parkinson’s [10–16]. Data from both neuroimaging and post-mortem studies indicate that brain iron increases throughout the lifespan, particularly in structures rich in neuromelanin such as the basal ganglia, caudate, and hippocampus. In Alzheimer’s and Parkinson’s, for example, it is thought that runaway iron in these structures eventually overwhelms the glial systems responsible for chelating (processing) iron; as iron becomes neurotoxic at excessive levels, this leads to a cascading chain of neural atrophy throughout the brain. Although we don’t know how this process begins (scientists believe factors including stress- and disease-related neuroinflammation, normal aging processes, and genetics all probably contribute), understanding how iron and myelination change over the lifespan is a crucial step towards understanding these diseases. Furthermore, because VBQ provides quantitative markers, data can be pooled and compared across research centres.
Recently I’ve been doing a lot of work with VBQ, examining for example how individual differences in metacognition and empathy relate to brain microstructure. One thing we were interested in doing with our data was examining whether we could follow up on previous work from our centre showing widespread age-related changes in iron and myelination. This was a pretty easy analysis to do with our 59 subjects, so I quickly put together a standard multiple regression model including age, gender, and total intracranial volume. Below are the maps for magnetization transfer (MT), longitudinal relaxation rate (R1), and effective transverse relaxation rate (R2*), which measure brain macromolecules/water, myelination, and iron, respectively (click each image to explore the map in NeuroVault!). All maps are FWE cluster-corrected, adjusting for non-sphericity, at a p < 0.001 inclusion threshold.
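For anyone wanting to reproduce this kind of analysis, the underlying model is just an ordinary multiple regression fit independently at every voxel (the cluster correction and non-sphericity adjustment happen in the neuroimaging package and are omitted here). A minimal numpy sketch, with entirely simulated data and made-up dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: 59 subjects, a handful of voxels for illustration
n_subjects, n_voxels = 59, 1000

# Design matrix: intercept, age, gender, total intracranial volume (TIV);
# all covariate values below are made up for the sketch
age = rng.uniform(18, 39, n_subjects)
gender = rng.integers(0, 2, n_subjects)
tiv = rng.normal(1500.0, 100.0, n_subjects)
X = np.column_stack([np.ones(n_subjects), age, gender, tiv])

# Fake quantitative maps (e.g. R2*), flattened to subjects x voxels,
# with a small simulated age effect (slope 0.02) built in
Y = 0.02 * age[:, None] + rng.normal(size=(n_subjects, n_voxels))

# Ordinary least squares fit for all voxels at once
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)  # beta: regressors x voxels
age_effect = beta[1]                          # age slope per voxel
print(age_effect.mean())                      # recovers roughly the simulated 0.02
```

A real pipeline would read the co-registered parameter maps into the subjects-by-voxels matrix and compute voxel-wise t-statistics for the age contrast, but the fit itself is exactly this.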
You can see that there is increased MT throughout the brain, particularly in the amygdala, postcentral gyrus, thalamus, and other midbrain and prefrontal areas. MT (roughly) measures water in the brain, and is mostly sensitive to myelination and to macromolecules such as microglia and astrocytes. Interestingly, our findings here contrast with those of Callaghan et al. (2014), who found decreases in myelination where we find increases. This is probably explained by differences in our samples.
R1 shows much more restricted effects, with increased R1 only in the left postcentral gyrus, at least in this sample. This is in contrast to Callaghan et al. [2], who found extensive negative MT & R1 effects, but that was in a much larger sample with much wider age-related variation (19–75, mean = 45). Interestingly, Martina and colleagues actually reported widespread decreases in R1, whereas we find no decreases and instead slight increases in both MT and R1. This may imply a U-shaped response of myelin to aging, which would fit with previous structural studies.
Our iron-sensitive map (R2*) somewhat reproduces their effects however, with significant increases in the hippocampus, posterior cingulate, caudate, and other dopamine-rich midbrain structures:
Wow! What really strikes me about this is that we can find age-related increases in a very young sample of mostly UCL students: iron is already accumulating in the 18–39 range. For comparison, here are the key findings from Martina’s paper:
The age effects in the left hippocampus are particularly interesting, as we found that iron and myelination in this area related to these participants’ metacognitive ability while controlling for age. Could this early-life iron accumulation be a predictive biomarker for developing neurodegenerative disease later in life? I think so. Large-sample prospective imaging could really open up this question; does anyone know if UK Biobank will collect this kind of data? UK Biobank will eventually contain ~200k scans with full medical workups and follow-ups. In a discussion on Facebook, Karla Miller mentioned there may be some low-resolution R2* images in that data. It could really be a big step forward to ask whether the first time point predicts clinical outcome; ultimately, early-life iron accumulation could be a key biomarker for neurodegeneration.
1. Gogtay, N. & Thompson, P. M. Mapping gray matter development: implications for typical development and vulnerability to psychopathology. Brain Cogn. 72, 6–15 (2010).
2. Callaghan, M. F. et al. Widespread age-related differences in the human brain microstructure revealed by quantitative magnetic resonance imaging. Neurobiol. Aging 35, 1862–1872 (2014).
3. Sala-Llonch, R., Bartrés-Faz, D. & Junqué, C. Reorganization of brain networks in aging: a review of functional connectivity studies. Front. Psychol. 6, 663 (2015).
4. Sugiura, M. Functional neuroimaging of normal aging: Declining brain, adapting brain. Ageing Res. Rev. (2016). doi:10.1016/j.arr.2016.02.006
5. Weiskopf, N., Mohammadi, S., Lutti, A. & Callaghan, M. F. Advances in MRI-based computational neuroanatomy: from morphometry to in-vivo histology. Curr. Opin. Neurol. 28, 313–322 (2015).
6. Callaghan, M. F., Helms, G., Lutti, A., Mohammadi, S. & Weiskopf, N. A general linear relaxometry model of R1 using imaging data. Magn. Reson. Med. 73, 1309–1314 (2015).
7. Mohammadi, S. et al. Whole-Brain In-vivo Measurements of the Axonal G-Ratio in a Group of 37 Healthy Volunteers. Front. Neurosci. 9, (2015).
8. Carlson, E. S. et al. Iron Is Essential for Neuron Development and Memory Function in Mouse Hippocampus. J. Nutr. 139, 672–679 (2009).
9. Georgieff, M. K. The role of iron in neurodevelopment: fetal iron deficiency and the developing hippocampus. Biochem. Soc. Trans. 36, 1267–1271 (2008).
10. Castellani, R. J. et al. Iron: The Redox-active Center of Oxidative Stress in Alzheimer Disease. Neurochem. Res. 32, 1640–1645 (2007).
11. Bartzokis, G. Alzheimer’s disease as homeostatic responses to age-related myelin breakdown. Neurobiol. Aging 32, 1341–1371 (2011).
12. Gouw, A. A. et al. Heterogeneity of white matter hyperintensities in Alzheimer’s disease: post-mortem quantitative MRI and neuropathology. Brain 131, 3286–3298 (2008).
13. Bartzokis, G. et al. MRI evaluation of brain iron in earlier- and later-onset Parkinson’s disease and normal subjects. Magn. Reson. Imaging 17, 213–222 (1999).
14. Berg, D. et al. Brain iron pathways and their relevance to Parkinson’s disease. J. Neurochem. 79, 225–236 (2001).
15. Dexter, D. T. et al. Increased Nigral Iron Content and Alterations in Other Metal Ions Occurring in Brain in Parkinson’s Disease. J. Neurochem. 52, 1830–1836 (1989).
16. Jellinger, P. D. K., Paulus, W., Grundke-Iqbal, I., Riederer, P. & Youdim, M. B. H. Brain iron and ferritin in Parkinson’s and Alzheimer’s diseases. J. Neural Transm. – Park. Dis. Dement. Sect. 2, 327–340 (1990).
Much like we picture ourselves, we tend to assume that each individual brain is a bit of a unique snowflake. When running a brain imaging experiment it is common for participants or students to excitedly ask what can be revealed specifically about them given their data. Usually, we have to give a disappointing answer – not all that much, as neuroscientists typically throw this information away to get at average activation profiles set in ‘standard’ space. Now a new study published today in Nature Neuroscience suggests that our brains do indeed contain a kind of person-specific fingerprint, hidden within the functional connectome. Perhaps even more interesting, the study suggests that particular neural networks (e.g. frontoparietal and default mode) contribute the greatest amount of unique information to your ‘neuro-profile’ and also predict individual differences in fluid intelligence.
To do so, lead author Emily Finn and colleagues at Yale University analysed repeated sets of functional magnetic resonance imaging (fMRI) data from 128 subjects over 6 different sessions (2 rest, 4 task), derived from the Human Connectome Project. After dividing each participant’s brain data into 268 nodes (a technique known as “parcellation”), Emily and colleagues constructed matrices of the pairwise correlation between all nodes. These correlation matrices (below, figure 1b), which encode the connectome or connectivity map for each participant, were then used in a permutation-based decoding procedure to determine how accurately a participant’s connectivity pattern could be identified from the rest. This involved taking a vector of edge values (connection strengths) from a participant in the training set and correlating it with a similar vector sampled randomly with replacement from the test set (i.e. testing whether one participant’s data correlated with another’s). Pairs with the highest correlation were then labelled “1” to indicate that the algorithm assigned a matching identity between a particular train-test pair. The results of this process were then compared to a similar one in which both pairs and subject identity were randomly permuted.
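The core of this identification procedure is simple enough to sketch in a few lines of numpy. Below is a toy reimplementation on simulated data, far smaller than the actual 268-node setup, and using a plain argmax over correlations rather than the paper’s exact sampling scheme:

```python
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_nodes, n_timepoints = 20, 50, 200

def connectome_vector(ts):
    """Node-by-node correlation matrix of a (nodes x time) array,
    flattened to its upper triangle (the vector of edge strengths)."""
    corr = np.corrcoef(ts)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

# Simulate two sessions per subject that share a subject-specific signal,
# plus independent session noise
signatures = rng.normal(size=(n_subjects, n_nodes, n_timepoints))
session1 = [connectome_vector(s + 0.5 * rng.normal(size=s.shape)) for s in signatures]
session2 = [connectome_vector(s + 0.5 * rng.normal(size=s.shape)) for s in signatures]

# Identification: each session-2 target is assigned the session-1 subject
# whose edge vector correlates most strongly with it
correct = 0
for i, target in enumerate(session2):
    r = [np.corrcoef(target, db)[0, 1] for db in session1]
    if int(np.argmax(r)) == i:
        correct += 1
print(f"identification accuracy: {correct / n_subjects:.0%}")
```

Because the within-subject signal is shared across sessions while the noise is not, the correct subject’s edge vector wins the argmax almost every time, which is the essence of the fingerprinting result.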
At first glance, the results are impressive:
Identification was performed using the whole-brain connectivity matrix (268 nodes; 35,778 edges), with no a priori network definitions. The success rate was 117/126 (92.9%) and 119/126 (94.4%) based on a target-database of Rest1-Rest2 and the reverse Rest2-Rest1, respectively. The success rate ranged from 68/126 (54.0%) to 110/126 (87.3%) with other database and target pairs, including rest-to-task and task-to-task comparisons.
This is a striking result – not only could identity be decoded from one resting state scan to another, but the identification also worked when going from rest to a variety of tasks and vice versa. Although classification accuracy dropped when moving between different tasks, these results were still highly significant when compared to the random shuffle, which only achieved a 5% success rate. Overall this suggests that inter-individual patterns in connectivity are highly reproducible regardless of the context from which they are obtained.
The authors then go on to perform a variety of crucial control analyses. For example, one immediate worry is that the high identification accuracy might be driven by head motion, which strongly influences functional connectivity and is likely to show strong within-subject correlation. Another concern might be that the accuracy is driven primarily by anatomical rather than functional features. The authors test both of these alternative hypotheses, first by applying the same decoding approach to an expanded set of root-mean-square motion parameters, and second by testing whether classification accuracy decreased as the data were increasingly smoothed (which should eliminate or reduce the contribution of anatomical features). Here the results were also encouraging: motion was totally unable to predict identity, resulting in less than 5% accuracy, and classification accuracy remained essentially the same across smoothing kernels. The authors further compared the contribution of their parcellation scheme to the more common, coarser-grained Yeo 8-network solution. This revealed that the coarser network division decreased accuracy, particularly for the fronto-parietal network, a decrease seemingly driven by increased reliability of the diagonal elements of the inter-subject matrix (which encode the intra-subject correlation). The authors suggest this may reflect the need for higher spatial precision to delineate individual patterns of fronto-parietal connectivity. Although this interpretation seems sensible, I do have to wonder if it conflicts with their smoothing-based control analysis. The authors also looked at how well they could identify an individual based on the variability of the BOLD signal in each region, and found that although this was also significant, it showed a systematic decrease in accuracy compared to the connectomic approach.
This suggests that although at least some of what makes an individual unique can be found in activity alone, connectivity data are needed for a more complete fingerprint. In a final control analysis (figure 2c below), training simultaneously on multiple data sets (for example a resting state scan and a task, to control for inherent differences in signal length) further increased accuracy, to as high as 100% in some cases.
Having established the robustness of their connectome fingerprints, Finn and colleagues then examined how much each individual cortical node contributed to identification accuracy. This analysis revealed a particularly interesting result: fronto-parietal and midline (‘default mode’) networks showed the highest contribution (above, figure 2a), whereas sensory areas appeared to contribute hardly at all. This complements their finding that the coarser-grained Yeo parcellation greatly reduced the contribution of these networks to classification accuracy. Further still, Finn and colleagues linked the contributions of these networks to behaviour, examining how strongly each network fingerprint predicted an overall index of fluid intelligence (g-factor). Again they found that fronto-parietal and default mode nodes were the most predictive of inter-individual differences in behaviour (in opposite directions, although I’d hesitate to interpret the sign of this finding given the global signal regression).
So what does this all mean? For starters, this is a powerful demonstration of the rich individual information that can be gleaned by combining connectome analyses with high-volume data collection. The authors not only showed that resting state networks are highly stable and individual within subjects, but that these signatures can be used to delineate the way the brain responds to tasks and even behaviour. Not only is the study well powered, but the authors clearly worked hard to generalize their results across a variety of datasets while controlling for quite a few important confounds. While previous studies have reported similar findings in structural and functional data, I’m not aware of any this generalisable or specific. The task-rest signature alone confirms that both measures reflect a common neural architecture, an important finding. I would be a little concerned about other vascular or breathing-related confounds; the authors do remove such nuisance variables, so this may not be a serious concern (though I am not convinced their use of global signal regression to control these variables is adequate). These minor concerns notwithstanding, I found the network-specific results particularly interesting; although previous studies indicate that functional and structural heterogeneity greatly increases along the fronto-parietal axis, this study is the first demonstration to my knowledge of the extremely high predictive power embedded within those differences. It is interesting to wonder how much of this stability is important for the higher-order functions supported by these networks – indeed it seems intuitive that self-awareness, social cognition, and cognitive control depend upon acquired experiences that are highly individual.
The authors conclude by suggesting that future studies may evaluate classification accuracy within an individual over many time points, raising the interesting question: Can you identify who I am tomorrow by how my brain connects today? Or am I “here today, gone tomorrow”?
Only time (and connectomics) may tell…
Thanks to Kate Mills for pointing out this interesting PLOS ONE paper from a year ago (cited by Finn et al), which used similar methods and also found high classification accuracy, albeit with a smaller sample and fewer controls:
In the spirit of procrastination, here is a random list I made up of things that seem to be trending in cognitive neuroscience right now, with a quick description of each. These are purely pulled from the depths of speculation, so please do feel free to disagree. Most of these are not actually new concepts; it’s more the way they are being used that makes them trendy.
7 hot trends in cognitive neuroscience according to me
Oscillations
Obviously oscillations have been around for a long time, but the rapid increase in technological sophistication of direct recordings (see for example high-density cortical arrays and deep brain stimulation + recording), coupled with the greater availability of MEG (plus rapid advances in MEG source reconstruction and analysis techniques), has placed large-scale neural oscillations at the forefront of cognitive neuroscience. Understanding how different frequency bands interact (e.g. phase coupling) has become a core topic of research in areas ranging from conscious awareness to memory and navigation.
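As a toy illustration of the kind of cross-frequency analysis this work involves, here is a minimal phase-amplitude coupling estimate on simulated data. The signal parameters and the mean-vector-length metric are my own choices for illustration, not drawn from any particular study:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

# Simulate 10 s of a theta-gamma coupled signal: the 40 Hz gamma
# amplitude waxes and wanes with the phase of a 6 Hz theta rhythm
fs = 500
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
gamma = (1 + theta) * np.sin(2 * np.pi * 40 * t)
sig = theta + 0.5 * gamma + 0.1 * np.random.default_rng(1).standard_normal(t.size)

def bandpass(x, lo, hi, fs):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

# Phase of the slow band, amplitude envelope of the fast band
phase = np.angle(hilbert(bandpass(sig, 4, 8, fs)))
amp = np.abs(hilbert(bandpass(sig, 30, 50, fs)))

# Mean vector length: clearly above zero when gamma amplitude
# is systematically locked to theta phase
mvl = np.abs(np.mean(amp * np.exp(1j * phase)))
```

In practice one would compare the coupling statistic against a surrogate distribution (e.g. phase-shuffled data) rather than interpret the raw value.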
Complex systems, dynamics, and emergence
Again, a concept as old as neuroscience itself, but this one seems to be piggy-backing on several other trends towards a new resurgence. As neuroscience grows bored of blobology, and our analysis methods move increasingly towards modelling dynamical interactions (see above) and complex networks, our explanatory metaphors more frequently emphasize brain dynamics and emergent causation. This is a clear departure from the boxological approach that was so prevalent in the ’80s and ’90s.
Direct intervention and causal inference
Pseudo-invasive techniques like transcranial direct-current stimulation are on the rise, partially because they allow us to perform virtual-lesion studies in ways not previously possible. Likewise, the exponential growth of neurobiological and genetic techniques has ushered in the era of optogenetics, which allows direct manipulation of information processing at the single-neuron level. Might this trend also reflect increased dissatisfaction with the correlational approaches that defined the last decade? You could also include the steadily increasing interest in pharmacological neuroimaging under this category.
Computational modelling and reinforcement learning
With the hype surrounding Google’s £200 million acquisition of Deep Mind, and the recent Nobel Prize for the discovery of grid cells, computational approaches to neuroscience are hotter than ever. Hardly a day goes by without a reinforcement learning or similar paper being published in a glossy high-impact journal. This one takes many forms, but it is undeniable that model-based approaches to cognitive neuroscience are all the rage. There is also a clear surge of interest in the Bayesian Brain approach, which could almost have its own bullet point. But that would be too self-serving 😉
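For readers who haven't met these models, the core of most of those reinforcement learning papers is tiny. Here is a minimal Rescorla-Wagner-style value update (a generic toy sketch, not any specific published model):

```python
def rw_update(value, reward, alpha=0.1):
    """One trial of delta-rule learning: nudge the current value
    estimate towards the received reward, scaled by learning rate."""
    prediction_error = reward - value          # the reward prediction error
    return value + alpha * prediction_error    # learning-rate-weighted update

# Learning a constant reward of 1.0 from an initial value of 0:
# the estimate converges towards 1.0 over trials
v = 0.0
for _ in range(100):
    v = rw_update(v, reward=1.0)
```

Model-based neuroimaging then regresses trial-by-trial quantities like `prediction_error` against the BOLD or electrophysiological signal.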
Gain control
Gain control is a very basic mechanism found throughout the central nervous system. It can be understood as the neuromodulatory weighting of post-synaptic excitability, and is thought to play a critical role in contextualizing neural processing. Gain control might, for example, allow a neuron that usually encodes a positive prediction error to ‘flip’ its sign and encode a negative prediction error in a certain context. Gain is thought to be regulated via the global interaction of neuromodulators (e.g. dopamine, acetylcholine), and it links basic information-theoretic processes with neurobiology. This makes it a particularly desirable tool for understanding everything from perceptual decision making to basic learning and the stabilization of oscillatory dynamics. Gain control thus links computational, biological, and systems-level work and is likely to continue to attract a lot of attention in the near future.
Hierarchies that are not really hierarchies
Neuroscience loves its hierarchies. For example, the Van Essen model, in which visual feature detection proceeds through a hierarchy of increasingly abstract functional processes, is one of the core explanatory tools used to understand vision in the brain. Currently, however, a great deal of connectomic and functional work is pointing out interesting ways in which global or feedback connections can re-route and modulate processing from the ‘top’ directly to the ‘bottom’ or vice versa. It’s worth noting this trend doesn’t do away with the old notion of hierarchies, but instead renders them a bit more complex and circular. Put another way, it is currently quite trendy to show that ‘the top is the bottom’ and ‘the bottom is the top’. This partially relates to the increased emphasis on emergence and complexity discussed above. A related trend is the extension of what counts as the ‘bottom’, with low-level subcortical or even first-order peripheral neurons suddenly being ascribed complex abilities typically reserved for cortical processes.
Primary sensations that are not so primary
Closely related to the previous point, there is a clear trend in the perceptual sciences towards questioning just how ‘primary’ the primary sensory areas really are. I saw this first-hand at last year’s Vision Sciences Society meeting, which featured at least a dozen posters showing how one could decode tactile shape from V1, or visual frequency from A1, and so on. Again, this is probably related to the overall movement towards complexity and connectionism; as we lose our reliance on modularity, we’re suddenly open to a much more general role for core sensory areas.
Interestingly, I didn’t include things like multi-modal or high-resolution imaging, as I think they are still emerging and have not quite fully arrived yet. But some of these – computational and connectomic modelling, for example – are clearly part and parcel of the contemporary zeitgeist. It’s also very interesting to look over this list, as there seems to be a clear trend towards complexity, connectionism, and dynamics. Are we witnessing a paradigm shift in the making? Or have we just forgotten all our first principles and started mangling any old thing we can get published? If it is a shift, what should we call it? Something like ‘computational connectionism’ comes to mind. Please feel free to add points or discuss in the comments!
Tonight I was playing around with some of the top features in neurosynth (the searchable terms with the highest number of studies containing each term). You can find the list here; just sort by the number of studies. I excluded the top 3 terms, which are boring (e.g. “image”, “response”, and “time”) and whose extremely high weights would mess up the wordle. I then created a word cloud weighted so that the size of each term reflects its number of studies.
Here are the top 200 terms sized according to number times reported in neurosynth’s 5809 indexed fMRI studies:
Pretty neat! These are the 200 terms the neurosynth database has the most information on, and they give a pretty good overview of key concepts and topics in our field! I am sure there is something useful for everyone in there 😀
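For the curious, the weighting step described above is easy to reproduce. This is a rough sketch of the term-selection logic only; the counts below are invented for illustration (the real numbers come from the neurosynth features table), and rendering the actual cloud would use a word-cloud tool of your choice:

```python
import csv
import io

# Hypothetical excerpt of the neurosynth features table: term, study count
# (illustrative values only, not the real database numbers)
raw = """term,studies
image,4200
response,3900
time,3700
memory,1200
pain,800
attention,1100
reward,600
"""

rows = list(csv.DictReader(io.StringIO(raw)))
rows.sort(key=lambda r: int(r["studies"]), reverse=True)

# Drop the top 3 uninformative terms, keep the rest for the cloud
kept = rows[3:]

# Font size proportional to study count, largest term at 72 pt
max_n = max(int(r["studies"]) for r in kept)
sizes = {r["term"]: 72 * int(r["studies"]) / max_n for r in kept}
```

Feeding `sizes` into any word-cloud renderer reproduces the count-weighted layout.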