Is there a ‘basement’ for quirky psychological research?

Beware the Basement!

One thing I will never forget from my undergraduate training in psychology is the first lecture of my personality theory class. The professor opened by informing us that he was quite sure that, of the 200+ students in the lecture hall, the majority were probably majoring in psychology because we thought it would be neat to study sex, consciousness, psychedelics, paranormal experience, meditation, or the like. He then informed us that this was a trap that befell almost all new psychology students, as we were all drawn to the study of the mind by the same siren call of the weird and wonderful human psyche. However, he warned, we should be very, very careful not to reveal these suppressed interests until we were well-established (I’m assuming he meant tenured) researchers – otherwise we’d risk being thrown into the infamous ‘basement of psychology’, never to be heard from again.

This colorful lecture really stuck with me through the years; I still jokingly refer to the basement whenever a quirkier research topic comes up. Of course, I did a pretty poor job of following this advice, seeing as my first project as a PhD student involved meditation, but I have nonetheless repressed an academic interest in more risqué topics throughout my career. And I’m not really actively avoiding them for fear of being placed in the basement – I’m mostly just following my own pragmatic research interests, and waiting for a day when I have more time and freedom to follow ideas that don’t directly tie into the core research line I’m developing.

But still. That basement. Does it really exist? In a world where a paper claiming that full bladders render us more politically conservative can make it into a prestigious journal, where scientists scan people having sex inside a scanner just to see what happens, and where psychologists seriously debate the possibility of precognition – can anything really be taboo? Or can we still distinguish a more serious avenue of research from these flightier topics? And what should be said about those who choose such topics?

Personally I think the idea of a ‘basement’ is largely a hold-over from the heyday of behaviorism, when psychologists were seriously concerned with positioning psychology as a hard science. Cognitivism has given rise to an endless bevy of serious topics that would once have been taboo: consciousness, embodiment, and emotion, to name a few. Still, in the always-snarky twittersphere, one can’t help but feel that there is still a certain amount of nose-thumbing at certain topics.

I think really, in the end, it’s not the topic so much as the method. Chris Frith once told me something to the tune of ‘in [cognitive neuroscience] all the truly interesting phenomena are beyond proper study’. We know the limitations of brain scans and reaction times, and so we tend to cringe a bit when someone trots out the latest silly super-human special-interest infotainment paper.

What do you think? Is there a ‘basement’ for silly research? And if so, what defines what sorts of topics should inhabit its dank confines?

We the Kardashians are Democratizing Science

I had a good laugh this weekend at a paper published in Genome Biology. Neil Hall, the paper’s author and a well-established Liverpool biologist, writes that in the brave new era of social media, there “is a danger that this form of communication is gaining too high a value and that we are losing sight of key metrics of scientific value, such as citation indices.” Wow, what a punchline! According to Neil, we’re in danger of forgetting that tweets and blogposts are, in his view, the worthless gossip of academia. After all, who reads Nature and Science these days?? I know so many colleagues getting big grants and tenure-track jobs just over their tweets! Never mind that Neil himself has about 11 papers published in Nature journals – or perhaps we are left to sympathize with the poor, untweeted author? Bitter sarcasm aside, the article is a fun bit of satire, and I’d like to think charitably that it was aimed not only at ‘altmetrics’, but at the metric enterprise in general. Still, I agree totally with Kathryn Clancy that the joke fails insofar as it seems to be ‘punching down’ at those of us with less established CVs than Neil, who take to social media in order to network and advance our own fledgling research profiles. I think it also betrays a critical misapprehension, common among established scholars, of how social media fits into the research ecosystem. This sentiment is expressed rather precisely by Neil when discussing his Kardashian index:

The Kardashian Index

“In an age dominated by the cult of celebrity we, as scientists, need to protect ourselves from mindlessly lauding shallow popularity and take an informed and critical view of the value we place on the opinion of our peers. Social media makes it very easy for people to build a seemingly impressive persona by essentially ‘shouting louder’ than others. Having an opinion on something does not make one an expert.”

So there you have it. Twitter equals shallow popularity. Never mind the endless possibilities of seamless, networked interactions with peers from around the world. Never mind sharing the latest results, discussing them, and branching these interactions into blog posts that themselves evolve into papers. Forget entirely that without this infosphere of interaction, we’d be left totally at the whims of Impact Factor to find interesting papers among the thousands published daily. What it’s really all about is building a “seemingly impressive persona” by “shouting louder” than others. What then does constitute effective scientific output, Neil? The answer, it seems, is more high-impact papers:

“I propose that all scientists calculate their own K-index on an annual basis and include it in their Twitter profile. Not only does this help others decide how much weight they should give to someone’s 140 character wisdom, it can also be an incentive – if your K-index gets above 5, then it’s time to get off Twitter and write those papers.”
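The K-index itself is simple arithmetic: your actual Twitter followers divided by the number of followers ‘expected’ given your citation count. A minimal sketch, assuming my recollection of Hall’s fitted curve is right – the constants 43.3 and 0.32 below are my memory of the paper, not verified values:

```python
def k_index(followers: float, citations: float) -> float:
    """Kardashian index: actual Twitter followers divided by the
    followers 'expected' from citation count.

    NOTE: the expected-follower curve F(c) = 43.3 * C**0.32 is my
    recollection of Hall's fit -- treat the constants as an assumption.
    """
    expected_followers = 43.3 * citations ** 0.32
    return followers / expected_followers

# A hypothetical scientist with 5000 followers and 1000 citations:
k = k_index(5000, 1000)
print(f"K-index: {k:.1f}")  # comfortably over Neil's threshold of 5
```

By this arithmetic, almost any active science tweeter without a mountainous citation record lands above 5 – which is rather the point of the satire.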

Well then, I’m glad we covered that. I’m sure there were many scientists and scholars out there who, amid the endless cycle of insane job pressure, publish-or-perish horse racing, and blood-feuding for grants, thought, ‘gee, I’d better just stop this publishing thing entirely and tweet instead’. And likewise, I’m sure every young scientist looks at ‘Kardashians’ and thinks, ‘hey, I’d better suspend all critical thinking, forget all my training, and believe everything this person says’. I hope you can feel me rolling my eyes. Seriously though – this represents a fundamental and common misunderstanding of the point of all this faffing about on the internet. Followers, impact, and notoriety are all poorly understood side-effects of this process; they are neither the means nor the goal. And never mind those less concrete (and, apparently, misleading) contributions like freely shared code, data, or thoughts – the point here is to blather and gossip!

Funny as the joke (sorta) is, it is this point that Neil’s article does the greatest disservice. We (the Kardashians) are democratizing science. We are filtering the literally unending deluge of papers to find the most outrageous, the most interesting, and the most forgotten, so that they can see the light of day beyond wherever they were buried. We seek out these papers to generate discussion and to garner attention where it is needed most. We are the academy’s newest first line of defense, contextualizing results when the media runs wild with them. We tweet often because there is a lot to tweet, and we gain followers because the things we tweet are interesting. And we do all of this without the comfort of a lofty CV or high-impact track record, with little concrete assurance that it will even benefit us, all while still trying to produce the standard signs of success. It may not seem like it now – but in time it will be clear that what we do is just as much a part of the scientific process as those lofty Nature papers. Are we perfect? No. Do we sometimes fall victim to sensationalism or crowd mentality? Of course – we are only fallible human beings, trying to find and create utility within a new frontier. We may not be the filter science deserves – but we are the one it needs. Wear your Kardashian index with pride.

Short post: why I share (and share often)

If you follow my social media activities, I am sure by now that you know me as a compulsive share-addict. Over the past four years I have gradually increased the amount of both incoming and outgoing information I attempt to integrate on a daily basis. I start every day with a now-routine ritual of scanning new publications from 60+ journals and blogs via my firehose RSS feed, as well as integrating new links from various science sub-reddits, my curated twitter cogneuro list, my friends and colleagues on Facebook, and email lists. I then in turn curate the best, the most relevant to my interests, or in some cases the most outrageous of these links and share them back to Twitter, Facebook, Reddit, and colleagues.

Of course, a frequent response to all this from (particularly more senior) colleagues is: why?! Why do I choose to spend the time both to take in all that information and to share it back to the world? The answer is quite simple: in sharing this stuff I get critical feedback from an ever-growing network of peers and collaborators. I can’t even count the number of times someone has pointed out something (for better or worse) that I would have otherwise missed in an article or idea. That’s right, I share it so I can see what you think of it! In this way I have been able not only to stay up to date with the latest research and concepts, but to receive constant, invaluable feedback from all of you lovely brains :). In some sense I literally distribute my cognition throughout my network – thanks for the extra neurons!

From the beginning, I have been able not only to assess the impact of this stuff, but also to gain deeper and more varied insights into its meaning. When I began my PhD I had the moderate statistical training of a BSc in psychology, with little direct knowledge of neuroimaging methods or theory. Frankly, it was bewildering. Just figuring out which methods to pay attention to, or what problems to look out for, was a headache-inducing nightmare. But I had to start somewhere, and so I started by sharing, and sharing often. As a result, almost every day I get amazing feedback pointing out critical insights or flaws in the things I share that I would have otherwise missed. In this way the entire world has become my interactive classroom! It is difficult to overstate the degree to which this interaction has enriched my abilities as a scientist and thinker.

It is only natural, however, for more senior investigators to worry about how much time one might spend on all this. I admit that in the early days of my PhD I may have lingered a bit too long amongst the RSS trees and twitter swarms. But then again, it is difficult to put a price on the knowledge and know-how I garnered in this process (not to mention the invaluable social capital generated in building such a network!). I am a firm believer in “power procrastination”, which is just the practice of regularly switching between more difficult but higher-priority tasks and more interesting but lower-priority ones. By spending my downtime taking in and sharing information, I’m letting my ‘default mode’ take a much-needed rest, while still feeding it inputs that will actually make the hard tasks easier.

In all, on a good day I’d say I spend about 20 minutes each morning taking in inputs and another 20 minutes throughout the day sharing them. Of course some days (looking at you, Fridays) I don’t always adhere to that, and there are times when I have to ‘just say no’ and wait until the evening to get into that workflow. Productivity techniques like the Pomodoro method have helped make sure I respect the balance when particularly difficult tasks arise. All in all, however, the time I spend sharing is paid back tenfold in new knowledge and deeper understanding.

Really I should be thanking all of you, the invaluable peers, friends, colleagues, followers, and readers who give me the feedback that is so totally essential to my cognitive evolution. So long as you keep reading- I’ll keep sharing! Thanks!!

Notes: I haven’t even touched on the value of blogging and post-publication peer review, which of course sums with the benefits mentioned here, but also has vastly improved my writing and comprehension skills! But that’s a topic for another post!

( don’t worry, the skim-share cycle is no replacement for deep individual learning, which I also spend plenty of time doing!)

“you are a von economo neuron!” – Francesca 🙂

Fun fact – I read the excellent scifi novel Accelerando just prior to beginning my PhD. In the novel the main character is an info-addict who integrates so much information that he gains a “5 second” prescience on events as they unfold. He then shares these insights for free with anyone who wants them, generating billion-dollar companies (of which he owns no part) and gradually manipulating global events to bring about a technological singularity. I guess you could say I found this to be a pretty neat character 🙂 In a serious vein, though, I am a firm believer in free and open science, self-publication, and sharing-based economies. Information deserves to be free!

Will multivariate decoding spell the end of simulation theory?

Decoding techniques such as multivariate pattern analysis (MVPA) are hot stuff in cognitive neuroscience, largely because they offer a tentative promise of actually reading out the underlying computations in a region rather than merely describing data features (e.g. mean activation profiles). While I am quite new to MVPA and similar machine learning techniques (so please excuse any errors in what follows), the basic process has been explained to me as a reversal of the X and Y variables in a typical general linear model. Instead of specifying a design matrix of explanatory (X) variables and testing how well those predict a single dependent (Y) variable (e.g. the BOLD timeseries in each voxel), you try to estimate an explanatory variable (essentially decoding the ‘design matrix’ that produced the observed data) from many Y variables, for example one Y variable per voxel (hence the multivariate part). The decoded explanatory variable then describes (BOLD) responses in a way that can vary in space, rather than reflecting an overall data feature across a set of voxels, such as a mean or slope. Typically, decoding analyses proceed in two steps: one in which you train the classifier on some set of voxels, and another in which you see how well that trained model can classify patterns of activity in another scan or task. It is precisely this ability to detect subtle spatial variations that makes MVPA an attractive technique – the GLM simply doesn’t account for such variation.
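To make the two-step logic concrete, here is a toy sketch using scikit-learn. This is purely illustrative – the ‘voxel’ data are synthetic, and real MVPA pipelines involve careful cross-validation across scanner runs, feature selection, and permutation testing. The setup mimics the key point: two conditions with identical mean activation but a subtly different spatial pattern.

```python
# Toy illustration of train-then-decode. We simulate trials of 'voxel'
# data for two conditions whose mean activation over the region is
# matched, but whose fine spatial pattern differs slightly -- exactly
# the variation a GLM contrast on the regional mean would miss.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_trials, n_voxels = 200, 50

labels = rng.integers(0, 2, n_trials)        # condition A (0) or B (1)
X = rng.normal(0.0, 1.0, (n_trials, n_voxels))

# Subtle, zero-mean spatial pattern added only on condition-B trials:
pattern = rng.normal(0.0, 0.5, n_voxels)
pattern -= pattern.mean()                    # keep regional mean equal
X[labels == 1] += pattern

# Step 1: 'train' the classifier on the first half of the trials
clf = LogisticRegression(max_iter=1000).fit(X[:100], labels[:100])

# Step 2: decode the held-out half, as if it were a second scan
accuracy = clf.score(X[100:], labels[100:])
print(f"decoding accuracy: {accuracy:.2f}")  # well above chance (0.5)
```

The point of the toy: averaging across the region, the two conditions are indistinguishable, yet the decoder recovers the condition from the distributed pattern – which is the whole appeal of the technique.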

The implicit assumption here is that by modeling subtle spatial variations across a set of voxels, you can actually pick up the neural correlates of the underlying computation or representation (Weil and Rees, 2010; Poldrack, 2011). To illustrate the difference between an MVPA and a GLM analysis, imagine a classical fMRI experiment where we have some set of voxels defining a region with a significant mean response to the experimental manipulation. All the GLM can tell us is that in each voxel the mean response is significantly different from zero. Each voxel within the significant region is likely to vary slightly in its actual response – you might imagine all sorts of subtle intensity variations within a significant region – but the GLM essentially ignores this variation. The exciting assumption driving interest in decoding is that this variability might actually reflect the activity of sub-populations of neurons and, by extension, actual neural representations. MVPA and similar techniques are designed to pick out when these variations form a coherent pattern; once identified, this pattern can be used to “predict” which particular stimulus the subject was seeing. While it isn’t entirely straightforward to interpret the patterns MVPA picks out as actual ‘neural representations’, there is some evidence that the decoded models reflect a finer granularity of neural sub-populations than is represented in overall mean activation profiles (Todd, 2013; Thompson, 2011).

Professor Xavier applies his innate talent for MVPA.

As you might imagine, this is terribly exciting, as it presents the possibility of actually ‘reading out’ the online function of some brain area rather than merely describing its overall activity. Since the inception of brain scanning this has been exactly the (largely failed) promise of imaging: reverse inference from neural data to actual cognitive/perceptual contents. It is understandable, then, that decoding papers are the ones most likely to appear in high-impact journals – just recently we’ve seen MVPA applied to dream states, reconstruction of visual experience, and pain experience, all in top journals (Kay et al., 2008; Horikawa et al., 2013; Wager et al., 2013). I’d like to focus on that last one for the remainder of this post, as I think we might draw some wide-reaching conclusions for theoretical neuroscience as a whole from Wager et al’s findings.

Francesca and I were discussing the paper this morning – she’s working on a commentary on a theoretical paper concerning the role of the “pain matrix” in empathy-for-pain research. For those of you not familiar with this area, the idea is a basic simulation-theory argument-from-isomorphism. Simulation theory (ST) is just the (in)famous idea that we use our own motor system (e.g. mirror neurons) to understand the gestures of others. In a now-famous experiment, Rizzolatti et al showed that motor neurons in the macaque monkey responded equally to the monkey’s own gestures and to the gestures of an observed other (Rizzolatti and Craighero, 2004). They argued that this structural isomorphism might represent a general neural mechanism, such that social-cognitive functions can be accomplished by simply applying our own neural apparatus to work out what is going on for the external entity. With respect to phenomena such as empathy for pain and ‘social pain’ (e.g. viewing a picture of someone you recently broke up with), this idea has been extended to suggest that, since a network of regions known as “the pain matrix” activates similarly when we are in pain and when we experience ‘social pain’, we “really feel” pain during these states (Kross et al., 2011) [1].

In her upcoming commentary, Francesca points out an interesting finding in the paper by Wager and colleagues that I had overlooked. Wager et al apply a decoding technique to subjects undergoing painful and non-painful stimulation. Quite impressively, they are then able to show that the decoded model predicts pain intensity across different scanners and various experimental manipulations. However, they note that the model does not accurately predict subjects’ ‘social pain’ intensity, even though the subjects activated a similar network of regions in both the physical and social pain tasks (see image below). One conclusion from these findings is that it is surely premature to conclude that, because a group of subjects activate the same regions during two related tasks, those isomorphic activations actually represent identical neural computations [2]. In other words, arguments from structural isomorphism like ST don’t provide any actual evidence for the mechanisms they presuppose.

Figure from Wager et al demonstrating specificity of classifier for pain vs warmth and pain vs rejection. Note poor receiver operating curve (ROC) for ‘social pain’ (rejecter vs friend), although that contrast picks out similar regions of the ‘pain matrix’.

To me this is exactly the right conclusion to take from Wager et al and similar decoding papers. To the extent that the assumption holds that MVPA identifies patterns corresponding to actual neural representations, we are rapidly coming to realize that a mere mean activation profile tells us relatively little about the underlying neural computations [3]. It certainly does not tell us enough to conclude much of anything from the fact that a group of subjects activate “the same brain region” for two different tasks. It is possible, and even likely, that just because I activate my motor cortex when viewing you move, I’m doing something quite different with those neurons than when I actually move about. And perhaps this was always the problem with simulation theory: it tries to make the leap from description (“similar brain regions activate for X and Y”) to mechanism, without actually describing a mechanism at all. I guess you could argue that this is really just a much fancier argument against reverse inference, and that we don’t need MVPA to do away with simulation theory. I’m not so sure, however – ST remains a strong force in a variety of domains. If decoding can actually do away with ST and arguments from isomorphism, or better still, provide a reasonable mechanism for simulation, it’ll be a great day in neuroscience. One thing is clear: model-based approaches will continue to improve cognitive neuroscience as we go beyond describing what brain regions activate during a task to actually explaining how those regions work together to produce behavior.

I’ve curated some enlightening responses to this post in a follow-up – worth checking for important clarifications and extensions! See also the comments on this post for a detailed explanation of MVPA techniques. 

References

Horikawa T, Tamaki M, Miyawaki Y, Kamitani Y (2013) Neural Decoding of Visual Imagery During Sleep. Science.

Kay KN, Naselaris T, Prenger RJ, Gallant JL (2008) Identifying natural images from human brain activity. Nature 452:352-355.

Kross E, Berman MG, Mischel W, Smith EE, Wager TD (2011) Social rejection shares somatosensory representations with physical pain. Proceedings of the National Academy of Sciences 108:6270-6275.

Poldrack RA (2011) Inferring mental states from neuroimaging data: from reverse inference to large-scale decoding. Neuron 72:692-697.

Rizzolatti G, Craighero L (2004) The mirror-neuron system. Annu Rev Neurosci 27:169-192.

Thompson R, Correia M, Cusack R (2011) Vascular contributions to pattern analysis: Comparing gradient and spin echo fMRI at 3T. NeuroImage 56:643-650.

Todd MT, Nystrom LE, Cohen JD (2013) Confounds in Multivariate Pattern Analysis: Theory and Rule Representation Case Study. NeuroImage.

Wager TD, Atlas LY, Lindquist MA, Roy M, Woo C-W, Kross E (2013) An fMRI-Based Neurologic Signature of Physical Pain. New England Journal of Medicine 368:1388-1397.

Weil RS, Rees G (2010) Decoding the neural correlates of consciousness. Current Opinion in Neurology 23:649-655.


[1] Interestingly, the Kross paper comes from the same group (Wager et al) whose present findings show that pain matrix activations do NOT predict ‘social’ pain. It will be interesting to see how they integrate this difference.

[2] Never mind the fact that the ‘pain matrix’ is not specific to pain.

[3] With all appropriate caveats regarding the ability of decoding techniques to resolve actual representations rather than confounding individual differences (Todd et al., 2013) or complex neurovascular couplings (Thompson et al., 2011).