Will multivariate decoding spell the end of simulation theory?

Decoding techniques such as multivariate pattern analysis (MVPA) are hot stuff in cognitive neuroscience, largely because they offer a tentative promise of actually reading out the underlying computations in a region rather than merely describing data features (e.g. mean activation profiles). While I am quite new to MVPA and similar machine learning techniques (so please excuse any errors in what follows), the basic process has been explained to me as a reversal of the X and Y variables in a typical general linear model. Instead of specifying a design matrix of explanatory (X) variables and testing how well those predict a single dependent (Y) variable (e.g. the BOLD timeseries in each voxel), you try to estimate an explanatory variable (essentially decoding the ‘design matrix’ that produced the observed data) from many Y variables, for example one Y variable per voxel (hence the multivariate part). The decoded explanatory variable then describes (BOLD) responses in a way that can vary in space, rather than reflecting an overall data feature across a set of voxels such as a mean or slope. Typically decoding analyses proceed in two steps: one in which you train the classifier on some set of voxels, and another where you see how well that trained model can classify patterns of activity in another scan or task. It is precisely this ability to detect patterns in subtle spatial variations that makes MVPA an attractive technique- the GLM simply doesn’t account for such variation.
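To make the X/Y reversal concrete, here is a toy sketch of the two-step train/test procedure described above. Everything is simulated (the ‘voxels’, the spatial pattern, and the noise levels are all invented), and the classifier is a deliberately simple nearest-centroid rule rather than anything used in a real MVPA pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

# The 'design matrix' we will try to decode: a condition label per trial
labels = rng.integers(0, 2, n_trials)
pattern = rng.normal(0, 1, n_voxels)              # hidden condition-specific spatial pattern
X = rng.normal(0, 1, (n_trials, n_voxels))        # per-voxel noise (the many Y variables)
X += np.outer(labels, pattern) * 0.5              # condition-1 trials carry the pattern

# Step 1: train on half the trials (learn one mean pattern per condition)
train, test = slice(0, 100), slice(100, 200)
centroids = np.stack([X[train][labels[train] == c].mean(axis=0) for c in (0, 1)])

# Step 2: classify held-out trials by nearest centroid
dists = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
acc = (pred == labels[test]).mean()
print(f"held-out decoding accuracy: {acc:.2f}")
```

The point is only the structure: the condition label, which would sit in the design matrix of a GLM, is here the thing being predicted from many voxels at once, and the trained model is evaluated on data it never saw. For real data you would swap in a cross-validated linear classifier.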

The implicit assumption here is that by modeling subtle spatial variations across a set of voxels, you can actually pick up the neural correlates of the underlying computation or representation (Weil and Rees, 2010; Poldrack, 2011). To illustrate the difference between an MVPA and a GLM analysis, imagine a classical fMRI experiment where we have some set of voxels defining a region with a significant mean response to your experimental manipulation. All the GLM can tell us is that in each voxel the mean response is significantly different from zero. Each voxel within the significant region is likely to vary slightly in its actual response- you might imagine all sorts of subtle intensity variations within a significant region- but the GLM essentially ignores this variation. The exciting assumption driving interest in decoding is that this variability might actually reflect the activity of sub-populations of neurons and, by extension, actual neural representations. MVPA and similar techniques are designed to pick out when these variations form a coherent pattern; once identified, this pattern can be used to “predict” when the subject was seeing one or another particular stimulus. While it isn’t entirely straightforward to interpret the patterns MVPA picks out as actual ‘neural representations’, there is some evidence that the decoded models reflect a finer granularity of neural sub-populations than is represented in overall mean activation profiles (Todd et al., 2013; Thompson et al., 2011).
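A toy illustration of that assumption (entirely simulated; the region size, effect sizes, and noise are arbitrary): two conditions can drive essentially identical mean activation in a region, which is all a univariate contrast sees, while carrying opposite fine-grained spatial patterns that a pattern-based readout separates easily:

```python
import numpy as np

rng = np.random.default_rng(1)
pattern = np.tile([1.0, -1.0], 20)               # balanced +/- pattern across 40 voxels
rng.shuffle(pattern)

# Both conditions activate the region (mean ~1.0); only the fine pattern differs
trials_a = 1.0 + 0.3 * pattern + rng.normal(0, 0.5, (100, 40))
trials_b = 1.0 - 0.3 * pattern + rng.normal(0, 0.5, (100, 40))
print(trials_a.mean(), trials_b.mean())          # nearly identical region means

# A pattern-based readout separates the conditions cleanly
acc = ((trials_a @ pattern > 0).mean() + (trials_b @ pattern < 0).mean()) / 2
print(f"pattern decoding accuracy: {acc:.2f}")
```

Because the pattern is balanced around zero, averaging across the region throws away exactly the information the decoder uses.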

Professor Xavier applies his innate talent for MVPA.

As you might imagine this is terribly exciting, as it presents the possibility of actually ‘reading out’ the online function of some brain area rather than merely describing its overall activity. Since the inception of brain scanning this has been exactly the (largely failed) promise of imaging- reverse inference from neural data to actual cognitive/perceptual contents. It is understandable then that decoding papers are the ones most likely to appear in high-impact journals- just recently we’ve seen MVPA applied to dream states, reconstruction of visual experience, and pain experience, all in top journals (Horikawa et al., 2013; Kay et al., 2008; Wager et al., 2013). I’d like to focus on that last one for the remainder of this post, as I think we might draw some wide-reaching conclusions for theoretical neuroscience as a whole from Wager et al’s findings.

Francesca and I were discussing the paper this morning- she’s working on a commentary for a theoretical paper concerning the role of the “pain matrix” in empathy-for-pain research. For those of you not familiar with this area, the idea is a basic simulation-theory argument-from-isomorphism. Simulation theory (ST) is just the (in)famous idea that we use our own motor system (e.g. mirror neurons) to understand the gestures of others. In a now-famous experiment, Rizzolatti and colleagues showed that motor neurons in the macaque monkey responded equally to the monkey’s own gestures and the gestures of an observed other (Rizzolatti and Craighero, 2004). They argued that this structural isomorphism might represent a general neural mechanism, such that social-cognitive functions can be accomplished by simply applying our own neural apparatus to work out what is going on for the external entity. With respect to phenomena such as empathy for pain and ‘social pain’ (e.g. viewing a picture of someone you broke up with recently), this idea has been extended to suggest that, since a network of regions known as “the pain matrix” activates similarly when we are in pain and when we experience ‘social pain’, we “really feel” pain during these states (Kross et al., 2011) [1].

In her upcoming commentary, Francesca points out an interesting finding in the paper by Wager and colleagues that I had overlooked. Wager et al apply a decoding technique in subjects undergoing painful and non-painful stimulation. Quite impressively, they are then able to show that the decoded model predicts pain intensity across different scanners and various experimental manipulations. However, they note that the model does not accurately predict subjects’ ‘social pain’ intensity, even though the subjects did activate a similar network of regions in both the physical and social pain tasks (see image below). One conclusion from these findings is that it is surely premature to conclude that, because a group of subjects may activate the same regions during two related tasks, those isomorphic activations actually represent identical neural computations [2]. In other words, arguments from structural isomorphism like ST don’t provide any actual evidence for the mechanisms they presuppose.
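This failure-to-generalize can be caricatured in simulation. Everything below is invented (it is not Wager et al’s signature or their method): a crude linear weight map trained on physical pain vs warmth discriminates nearly perfectly within-domain, yet sits near chance on a ‘social pain’ contrast that activates the very same voxels with a different fine-grained pattern:

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox = 60
pain_pat = rng.normal(0, 1, n_vox)
social_pat = rng.normal(0, 1, n_vox)
# Make the social pattern orthogonal to the pain pattern: same region, different code
social_pat -= (social_pat @ pain_pat) / (pain_pat @ pain_pat) * pain_pat
base = np.abs(rng.normal(1.0, 0.2, n_vox))       # shared 'pain matrix' activation

def trials(pat, intensity, n=100):
    return base + intensity * pat + rng.normal(0, 0.5, (n, n_vox))

# Train a crude linear 'signature' on physical pain vs warmth
pain, warmth = trials(pain_pat, 1.0), trials(pain_pat, 0.0)
w = pain.mean(axis=0) - warmth.mean(axis=0)

def auc(pos, neg):                               # chance a positive trial outscores a negative one
    return ((pos @ w)[:, None] > (neg @ w)[None, :]).mean()

print(f"pain vs warmth AUC:     {auc(pain, warmth):.2f}")       # near-perfect within-domain
rejecter, friend = trials(social_pat, 1.0), trials(social_pat, 0.0)
print(f"rejecter vs friend AUC: {auc(rejecter, friend):.2f}")   # near chance across domains
```

Both tasks light up the same ‘matrix’ of voxels, so a mean-activation map looks identical, yet the trained pattern carries no information about the second task- which is exactly the dissociation the ROC figure below illustrates.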

Figure from Wager et al demonstrating specificity of classifier for pain vs warmth and pain vs rejection. Note poor receiver operating curve (ROC) for 'social pain' (rejecter vs friend), although that contrast picks out similar regions of the 'pain matrix'.

To me this is exactly the right conclusion to take from Wager et al and similar decoding papers. To the extent that the assumption that MVPA identifies patterns corresponding to actual neural representations holds, we are rapidly coming to realize that a mere mean activation profile tells us relatively little about the underlying neural computations [3]. It certainly does not tell us enough to conclude much of anything from the fact that a group of subjects activate “the same brain region” for two different tasks. It is possible and even likely that when I activate my motor cortex while watching you move, I’m doing something quite different with those neurons than when I actually move about myself. And perhaps this was always the problem with simulation theory- it tries to make the leap from description (“similar brain regions activate for X and Y”) to mechanism, without actually describing a mechanism at all. I guess you could argue that this is really just a much fancier argument against reverse inference, and that we don’t need MVPA to do away with simulation theory. I’m not so sure, however- ST remains a strong force in a variety of domains. If decoding can actually do away with ST and arguments from isomorphism, or better still, provide a reasonable mechanism for simulation, it’ll be a great day in neuroscience. One thing is clear- model-based approaches will continue to improve cognitive neuroscience as we go beyond describing what brain regions activate during a task to actually explaining how those regions work together to produce behavior.

I’ve curated some enlightening responses to this post in a follow-up – worth checking for important clarifications and extensions! See also the comments on this post for a detailed explanation of MVPA techniques. 

References

Horikawa T, Tamaki M, Miyawaki Y, Kamitani Y (2013) Neural Decoding of Visual Imagery During Sleep. Science.

Kay KN, Naselaris T, Prenger RJ, Gallant JL (2008) Identifying natural images from human brain activity. Nature 452:352-355.

Kross E, Berman MG, Mischel W, Smith EE, Wager TD (2011) Social rejection shares somatosensory representations with physical pain. Proceedings of the National Academy of Sciences 108:6270-6275.

Poldrack RA (2011) Inferring mental states from neuroimaging data: from reverse inference to large-scale decoding. Neuron 72:692-697.

Rizzolatti G, Craighero L (2004) The mirror-neuron system. Annu Rev Neurosci 27:169-192.

Thompson R, Correia M, Cusack R (2011) Vascular contributions to pattern analysis: Comparing gradient and spin echo fMRI at 3T. Neuroimage 56:643-650.

Todd MT, Nystrom LE, Cohen JD (2013) Confounds in Multivariate Pattern Analysis: Theory and Rule Representation Case Study. NeuroImage.

Wager TD, Atlas LY, Lindquist MA, Roy M, Woo C-W, Kross E (2013) An fMRI-Based Neurologic Signature of Physical Pain. New England Journal of Medicine 368:1388-1397.

Weil RS, Rees G (2010) Decoding the neural correlates of consciousness. Current Opinion in Neurology 23:649-655.


[1] Interestingly this paper comes from the same group (Wager et al) showing that pain matrix activations do NOT predict ‘social’ pain. It will be interesting to see how they integrate this difference.

[2] Never mind the fact that the ‘pain matrix’ is not specific for pain.

[3] With all appropriate caveats regarding the ability of decoding techniques to resolve actual representations rather than confounding individual differences (Todd et al., 2013) or complex neurovascular couplings (Thompson et al., 2011).

Quick post – Dan Dennett’s Brain talk on Free Will vs Moral Responsibility

As a few people have asked me to give some impression of Dan’s talk at the FIL Brain meeting today, I’m just going to jot down my quickest impressions before I run off to the pub to celebrate finishing my dissertation today. Please excuse any typos, as what follows is unedited! Dan gave a talk very similar to his previous one several months ago at the UCL philosophy department. As always, Dan gave a lively talk with lots of funny moments and appeals to common sense. Here the focus was more on the media activities of neuroscientists, with some particularly funny finger-wagging at Patrick Haggard and Chris Frith. Some good bits were his discussion of evidence that priming subjects against free will seems to make them more likely to commit immoral acts (cheating, stealing), and a very firm statement that neuroscience is being irresponsible, complete with bombastic anti-free-will quotes by the usual suspects. Although I am a bit rusty on the mechanics of the free will debate, Dennett essentially argued for a compatibilist view of free will and determinism. The argument goes something like this: the basic idea that free will is incompatible with determinism comes from a mythology that says in order to have free will, an agent must be wholly unpredictable. Dennett argues that this is absurd; we only need to be somewhat unpredictable. Rather than being perfectly random free agents, Dennett argues that what really matters is moral responsibility pragmatically construed. Dennett lists a “spec sheet” for constructing a morally responsible agent, including “could have done otherwise, is somewhat unpredictable, acts for reasons, is subject to punishment…”. In essence Dan seems to be claiming that neuroscientists don’t really care about “free will”; rather, we care about the pragmatic limits within which we feel comfortable entering into legal agreements with an agent.
Thus the job of the neuroscientist is not to try to reconcile the folk and scientific views of “free will”, which isn’t interesting (on Dennett’s account) anyway, but rather to describe the conditions under which an agent can be considered morally responsible. The take-home message seemed to be that moral responsibility is essentially a political rather than a metaphysical construct. I’m afraid I can’t go into terrible detail about the supporting arguments- to be honest, Dan’s talk was extremely short on argumentation. The version he gave to the philosophy department was much heavier on technical argumentation, particularly centered around proving that compatibilism doesn’t contradict “it could have been otherwise”. In all, the talk was very pragmatic, and I do agree with the conclusions to some degree- that we ought to be more concerned with the conditions and function of “will” and not argue so much about the metaphysics of “free”. Still, my inner philosopher felt that Dan is embracing some kind of basic logical contradiction and hand-waving it away with funny intuition pumps, which for me are typically unsatisfying.

For reference, here is the abstract of the talk:

Nothing—yet—in neuroscience shows we don’t have free will

Contrary to the recent chorus of neuroscientists and psychologists declaring that free will is an illusion, I’ll be arguing (not for the first time, but with some new arguments and considerations) that this familiar claim is so far from having been demonstrated by neuroscience that those who advance it are professionally negligent, especially given the substantial social consequences of their being believed by lay people. None of the Libet-inspired work has the drastic implications typically adduced, and in fact the Soon et al (2008) work, and its descendants, can be seen to demonstrate an evolved adaptation to enhance our free will, not threaten it. Neuroscientists are not asking the right questions about free will—or what we might better call moral competence—and once they start asking and answering the right questions we may discover that the standard presumption that all “normal” adults are roughly equal in moral competence and hence in accountability is in for some serious erosion. It is this discoverable difference between superficially similar human beings that may oblige us to make major revisions in our laws and customs. Do we human beings have free will? Some of us do, but we must be careful about imposing the obligations of our good fortune on our fellow citizens wholesale.

Enactive Bayesians? Response to “the brain as an enactive system” by Gallagher et al

Shaun Gallagher has a short new piece out with Hutto, Slaby, and Cole, and I felt compelled to comment on it. Shaun was my first mentor and is to thank for my understanding of what is at stake in a phenomenological cognitive science. I jumped on this piece when it came out because, as I’ve said before, enactivists often leave a lot to be desired when talking about the brain. That is to say, they more often than not leave it out entirely and focus instead on bodies, cultural practices, and other parts of our extra-neural milieu. As a neuroscientist who is enthusiastically sympathetic to the embodied, enactive approach to cognition, I find this worrisome. Which is to say that when I’ve tried to conduct “neurophenomenological” experiments, I often feel a bit left out in the rain when it comes time to construct, analyze, and interpret the data.

As an “enactive” neuroscientist, I often find the de-emphasis of brains a bit troubling. For one thing, the radically phenomenological crew tends to make a lot of claims about altering the foundations of neuroscience. Things like information processing and mental representation are said to be stale, Cartesian constructs that lack ontological validity and ought to be replaced. This is fine- I’m totally open to the limitations of our current explanatory framework. However, as I’ve argued here, I believe neuroscience still has great need of these tools and that dynamical systems theory is not ready for prime-time neuroscience. We need a strong positive account of what we should replace them with, and that account needs to act as a practical and theoretical guide to discovery.

One worry I have is that enactivism quickly begins to look like a constructivist version of behaviorism, focusing exclusively on behavior to the exclusion of the brain. Of course I understand that this is a bit unfair; enactivism is about taking a dynamical, encultured, phenomenological view of the human being seriously. Yet I believe that to accomplish this we must also understand the function of the nervous system. While enactivists will often give token credit to the brain- affirming that it is indeed an ‘important part’ of the cognitive apparatus- they seem quick to value things like clothing and social status over gray matter. Call me old-fashioned, but you could strip me of my job, titles, and clothing tomorrow and I’d still be capable of 80% of whatever I was before. Granted, my cognitive system would undergo a good deal of strain, but I’d still be fully capable of vision, memory, speech, and even consciousness. The same can’t be said of me if you start magnetically stimulating my brain in interesting and devious ways.

I don’t want to get derailed arguing about the explanatory locus of cognition, as I think one’s stance on the matter largely comes down to whatever your intuition pump tells you is important. We could argue about it all day; what matters more than where in the explanatory hierarchy we place the brain is how that framework lets us predict and explain neural function and behavior. This is where I think enactivism often fails: it’s all fire and bluster (and rightfully so!) when it comes to the philosophical weaknesses of empirical cognitive science, yet it mumbles and missteps when it comes to giving positive advice to scientists. I’m all for throwing out the dogma and getting phenomenological, but only if there’s something useful ready to replace the methodological bathwater.

Gallagher et al’s piece starts:

 “… we see an unresolved tension in their account. Specifically, their questions about how the brain functions during interaction continue to reflect the conservative nature of ‘normal science’ (in the Kuhnian sense), invoking classical computational models, representationalism, localization of function, etc.”

This is quite true, and an important tension runs throughout much of the empirical work done under the heading of enactivism. In my own group we’ve struggled to go from the inspiring war cries of anti-representationalism and interaction theory to the hard constraints of neuroscience. It often happens that while the story or theoretical grounding is suitably phenomenological and enactive, the methodology and its interpretation are necessarily cognitivist in nature.

Yet I think this difficulty points to the more difficult task ahead if enactivism is to succeed. Science is fundamentally about methodology, and methodology reflects and is constrained by one’s ontological/explanatory framework. We measure reaction times and neural signal lags precisely because we buy into a cognitivist framework of cognition, which essentially argues for computations that take longer to process with increasing complexity, recruiting greater neural resources. The catch is, without these things it’s not at all clear how we are to construct, analyze, and interpret our data.  As Gallagher et al correctly point out, when you set out to explain behavior with these tools (reaction times and brain scanners), you can’t really claim to be doing some kind of radical enactivism:

 “Yet, in proposing an enactive interpretation of the MNS Schilbach et al. point beyond this orthodox framework to the possibility of rethinking, not just the neural correlates of social cognition, but the very notion of neural correlate, and how the brain itself works.”

We’re all in agreement there: I want nothing more than to understand exactly how it is our cerebral organ accomplishes the impressive feats of locomotion, perception, homeostasis, and so on right up to consciousness and social cognition. Yet I’m a scientist and no matter what I write in my introduction I must measure something- and what I measure largely defines my explanatory scope. So what do Gallagher et al offer me?

 “The enactive interpretation is not simply a reinterpretation of what happens extra-neurally, out in the intersubjective world of action where we anticipate and respond to social affordances. More than this, it suggests a different way of conceiving brain function, specifically in non-representational, integrative and dynamical terms (see e.g., Hutto and Myin, in press).”

Ok, so I can’t talk about representations. Presumably we’ll call them “processes” or something like that. Whatever we call them, neurons are still doing something, and that something is important in producing behavior. Integrative- I’m not sure what that means, but I presume it means that whatever neurons do, they do it across sensory and cognitive modalities. Finally we come to dynamical- here is where it gets really tricky. Dynamical systems theory (DST) is an incredibly complex mathematical framework dealing with topology, fluid dynamics, and chaos theory. Can DST guide neuroscientific discovery?

This is a tough question. My own limited exposure to DST prevents me from making hard conclusions here. For now let’s set it aside- we’ll come back to this in a moment. First I want to get a better idea of how Gallagher et al characterize contemporary neuroscience, the source of this tension in Schillbach et al:

Functional MRI technology goes hand in hand with orthodox computational models. Standard use of fMRI provides an excellent tool to answer precisely the kinds of questions that can be asked within this approach. Yet at the limits of this science, a variety of studies challenge accepted views about anatomical and functional segregation (e.g., Shackman et al. 2011; Shuler and Bear 2006), the adequacy of short-term task-based fMRI experiments to provide an adequate conception of brain function (Gonzalez-Castillo et al. 2012), and individual differences in BOLD signal activation in subjects performing the same cognitive task (Miller et al. 2012). Such studies point to embodied phenomena (e.g., pain, emotion, hedonic aspects) that are not appropriately characterized in representational terms but are dynamically integrated with their central elaboration.

Claim one is what I’ve just argued above, that fMRI and similar tools presuppose computational cognitivism. What follows I feel is a mischaracterization of cognitive neuroscience. First we have the typical bit about functional segregation being extremely limited. It surely is and I think most neuroscientists today would agree that segregation is far from the whole story of the brain. Which is precisely why the field is undeniably and swiftly moving towards connectivity and functional integration, rather than segregation. I’d wager that for a few years now the majority of published cogneuro papers focus on connectivity rather than blobology.

Next we have a sort of critique of the use of focal cognitive tasks. This almost seems like a critique of science itself; while certainly not without limits, neuroscientists rely on such tasks in order to make controlled assessments of phenomena. There is nothing a priori that says a controlled experiment is necessarily cognitivist, any more than a controlled physics experiment must necessarily be Newtonian rather than relativistic. And again, I’d characterize contemporary neuroscience as being positively in love with “task-free” resting-state fMRI. So I’m not sure at what this criticism is aimed.

Finally there is this bit about individual differences in BOLD activation. This one I think is really a red herring; there is nothing in fMRI methodology that prevents scientists from assessing individual differences in neural function and architecture. The group I’m working with in London specializes in exactly this kind of analysis, which is essentially just creating regression models with neural and behavioral independent and dependent variables. There certainly is a lot of variability in brains, and neuroscience is working hard and making strides towards understanding those phenomena.
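For what it’s worth, the kind of individual-differences analysis I mean is methodologically mundane. A minimal sketch with fabricated per-subject numbers (the subject count, effect size of 0.6, and noise level are all made up): regress a behavioral score on a per-subject neural measure, treating the variability across subjects as the signal of interest:

```python
import numpy as np

rng = np.random.default_rng(3)
n_subjects = 40
neural = rng.normal(0, 1, n_subjects)                  # e.g. per-subject BOLD effect size
behavior = 0.6 * neural + rng.normal(0, 0.5, n_subjects)

# Ordinary least squares across subjects: intercept plus neural predictor
X = np.column_stack([np.ones(n_subjects), neural])
beta, *_ = np.linalg.lstsq(X, behavior, rcond=None)
print(f"slope relating neural measure to behavior: {beta[1]:.2f}")
```

Nothing in standard fMRI methodology prevents this; the ‘individual differences’ worry cuts against a particular analysis choice, not the technology.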

 “Consider also recent challenges to the idea that so-called “mentalizing” areas (“cortical midline structures”) are dedicated to any one function. Are such areas activated for mindreading (Frith and Frith 2008; Vogeley et al. 2001), or folk psychological narrative (Perner et al. 2006; Saxe & Kanwisher 2003); a default mode (e.g., Raichle et al. 2001), or other functions such as autobiographical memory, navigation, and future planning (see Buckner and Carroll 2006; 2007; Spreng, Mar and Kim 2008); or self-related tasks (Northoff & Bermpohl 2004); or, more general reflective problem solving (Legrand and Ruby 2010); or are they trained up for joint attention in social interaction, as Schilbach et al. suggest; or all of the above and others yet to be discovered?”

I guess this paragraph is supposed to get us thinking that these seem really different, so clearly the localizationist account of the MPFC fails. But as I’ve just said, this is for one a bit of a red herring- most neuroscientists no longer believe exclusively in a localizationist account. In fact more and more I hear top neuroscientists disparaging overly blobological accounts and referring to prefrontal cortex as a whole. Functional integration is here to stay. Further, I’m not sure I buy their argument that these functions are so disparate- it seems clear to me that they all share a social, self-related core probably related to the default mode network.

Finally, Gallagher and company set out to define what we should be explaining- behavior as “a dynamic relation between organisms, which include brains, but also their own structural features that enable specific perception-action loops involving social and physical environments, which in turn effect statistical regularities that shape the structure of the nervous system.” So we do want to explain brains, but we want to understand that their setting configures both neural structure and function. Fair enough, I think you would be hard pressed to find a neuroscientist who doesn’t agree that factors like environment and physiology shape the brain. [edit: thanks to Bryan Patton for pointing out in the comments that Gallagher’s description of behavior here is strikingly similar to accounts given by Friston’s Free Energy Principle predictive coding account of biological organisms]

Gallagher asks then, “what do brains do in the complex and dynamic mix of interactions that involve full-out moving bodies, with eyes and faces and hands and voices; bodies that are gendered and raced, and dressed to attract, or to work or play…?” I am glad to see that my former mentor and I agree at least on the question at stake, which seems to be, what exactly is it brains do? And we’re lucky in that we’re given an answer by Gallagher et al:

“The answer is that brains are part of a system, along with eyes and face and hands and voice, and so on, that enactively anticipates and responds to its environment.”

 Me reading this bit: “yep, ok, brains, eyeballs, face, hands, all the good bits. Wait- what?” The answer is “… a system that … anticipates and responds to its environment.” Did Karl Friston just enter the room? Because it seems to me like Gallagher et al are advocating a predictive coding account of the brain [note: see clarifying comment by Gallagher, and my response below]! If brains anticipate their environment, then they must be constructing a forward (generative) model of their inputs: a statistical model that predicts sensory input from hypothesized causes and which, inverted via Bayes’ rule, yields posterior probabilities of a stimulus from prior predictions about its nature. We could argue all day about what to call that model, but clearly what we’ve got here is a brain using strong internal models to make predictions about the world. What is “enactive” about these forward models, however, seems like an extremely ambiguous notion.
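For concreteness, the simplest textbook case of such a model (toy numbers, a single Gaussian ‘stimulus feature’): the posterior is just a precision-weighted compromise between the prior prediction and the sensory evidence, which is the arithmetic predictive coding schemes are built around:

```python
# Toy Gaussian example: the posterior precision-weights prior prediction and evidence
mu_prior, prec_prior = 0.0, 1.0     # prior prediction about a stimulus feature
obs, prec_obs = 2.0, 4.0            # noisy sensory sample and its precision (1/variance)

prec_post = prec_prior + prec_obs
mu_post = (prec_prior * mu_prior + prec_obs * obs) / prec_post
print(mu_post, 1 / prec_post)       # prints: 1.6 0.2
```

The posterior mean lands four-fifths of the way toward the observation because the senses here are four times as precise as the prior; nothing in that arithmetic mentions, or forbids, bodies and environments.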

To this extent, Gallagher includes “How an agent responds will depend to some degree on the overall dynamical state of the brain and the various, specific and relevant neuronal processes that have been attuned by evolutionary pressures, but also by personal experiences” as a description of how a prediction can be enactive. But none of this is precluded by the predictive coding account of the brain. The overall dynamical state (intrinsic connectivity?) of the brain amounts to noise that must be controlled through increasing neural gain and precision. I.e., a Bayesian model presupposes that the brain is undergoing exactly these kinds of fluctuations and takes steps to produce optimal behavior in the face of such noise.

Likewise, the Bayesian model is fully hierarchical: at all levels of the system, local neural function is constrained and configured by predictions and error signals from the levels above and below it. In this sense, global dynamical phenomena like neuromodulation structure prediction in ways that constrain local dynamics. These relationships can be fully non-linear and dynamical in nature (see Friston 2009 for review). Of the other bits – evolution and individual differences – Karl would surely say that the former leads to variation in first priors and the latter is the product of agents optimizing their behavior in a variable world.
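The dynamical flavor of this can be shown in a toy relaxation scheme (one level only; the numbers and learning rate are invented, and real predictive coding schemes are hierarchical): a belief nudged repeatedly by precision-weighted prediction errors settles exactly on the Bayesian posterior mean, with the higher-precision signal dominating:

```python
# One-level predictive coding sketch: the belief mu is driven by precision-weighted
# prediction errors until the two errors balance
mu_prior, pi_prior = 0.0, 1.0       # prior prediction and its precision
obs, pi_obs = 2.0, 4.0              # sensory sample and its precision

mu, lr = 0.0, 0.05
for _ in range(2000):
    eps_obs = obs - mu              # sensory prediction error
    eps_prior = mu_prior - mu       # prior prediction error
    mu += lr * (pi_obs * eps_obs + pi_prior * eps_prior)
print(round(mu, 3))                 # prints: 1.6  (the posterior mean)
```

So ‘dynamical’ and ‘Bayesian’ are not opposed here: the dynamics are simply how the inference gets done.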

So there you have it- enactivist cognitive neuroscience is essentially Bayesian neuroscience. If I want to fulfill Gallagher et al’s prescriptions, I need merely use resting state, connectivity, and predictive coding analysis schemes. Yet somehow I think this isn’t quite what they meant- and therein, for me, lies the true tension in ‘enactive’ cognitive neuroscience. But maybe it is what they meant- Andy Clark recently went Bayesian, claiming that extended cognition and predictive coding are totally compatible. Maybe it’s time to put away the knives and stop arguing about representations. Yet I think an important tension remains: can we explain all the things Gallagher et al list as important using prior and posterior probabilities? I’m not totally sure, but I do know one thing- these concepts make it a hell of a lot easier to actually analyze and interpret my data.

fake edit:

I said I’d discuss DST, but ran out of space and time. My problem with DST boils down to this: it’s descriptive, not predictive. As a scientist it is not clear to me how one actually applies DST to a given experiment. I don’t see any kind of functional ontology emerging by which to apply the myriad DST measures in a principled way. Mental chronometry may be hokey and old-fashioned, but it’s easy to understand and can be applied to data and interpreted readily. This is a huge limitation for a field as complex as neuroscience, and as rife with bad data. A leading dynamicist once told me that in his entire career “not one prediction he’d made about (a DST measure/experiment) had come true”, and that to apply DST one just needed to “collect tons of data and then apply every measure possible until one seemed interesting”. To me this is a data-fishing nightmare and does not represent a reliable guide to empirical discovery.

What are the critical assumptions of neuroscience?

In light of all the celebration surrounding the discovery of a Higgs-like particle, I found it amusing that nearly 30 years ago Higgs’ theory was rejected by CERN as ‘outlandish’. This got me to wondering: just how often is scientific consensus a bar to discovery? Scientists are only human, and as such can be just as prone to blind spots, biases, and herding behaviors as other humans. Clearly the scientific method and scientific consensus (e.g. peer review) are the tools we rely on to surmount these biases. Yet every tool has its misuse, and sometimes the wisdom of the crowd is just the aggregate of all these biases.

At this point, David Zhou pointed out that when scientific consensus leads to rejection of correct viewpoints, it’s often due to the strong implicit assumptions that the dominant paradigm rests upon. Sometimes there are assumptions supporting our theories which, due to a lack of either conceptual or methodological sophistication, are not amenable to investigation. Other times we simply don’t see them; when Chomsky famously wrote his review of Skinner’s Verbal Behavior, he simply put together all the pieces of the puzzle that were floating around, and in doing so destroyed a 20-year scientific consensus.

Of course, as a cognitive scientist studying the brain, I often puzzle over what assumptions I critically depend upon to do my work. In an earlier stage of my training, I was heavily inundated with ideas from the “embodied, enactive, extended” framework, where it is common to claim that the essential bias is an uncritical belief in the representational theory of mind. While I do still see problems in mainstream information theory, I’m no longer convinced that an essentially internalist, predictive-coding account of the brain is without merit. It seems to me that the “revolution” of externalist viewpoints turned out to be more of an exercise in housekeeping, moving us beyond overly simplistic “just-so” evolutionary diatribes and empty connectionism, to introducing concepts from dynamical systems to information theory in the context of cognition.

So, really I’d like to open this up: what do you think are assumptions neuroscientists cannot live without? I don’t want to shape the discussion too much, but here are a few starters off the top of my head:

  • Nativism: informational constraints are heritable and innate, learning occurs within these bounds
  • Representation: Physical information is transduced by the senses into abstract representations for cognition to manipulate
  • Frequentism: While many alternatives currently abound, for the most part I think many mainstream neuroscientists are crucially dependent on assessing differences in mean and slope. A related issue is a tendency to view variability as “noise”
  • Mental Chronometry: related to the representational theory of mind is the idea that more complex representations take longer to process and require more resources. Thus greater (BOLD/ERP/RT) equals a more complex process.
  • Evolution: for a function to exist, it must have been selected for by natural selection

That’s all off the top of my head. What do you think? Are these essential for neuroscience? What might a cognitive theory look like without them, and how could it motivate empirical research? For me, each of these is in some way quite helpful in providing a framework to interpret reaction-time, BOLD, or other cognition-related data. Have I missed any?

The 2011 Mind & Life Summer Research Institute: Are Monks Better at Introspection?

As I’m sitting in the JFK airport waiting for my flight to Iceland, I can’t help but let my mind wander over the curious events of this year’s summer research institute (SRI). The Mind & Life Institute, an organization dedicated to the integration and development of what they’ve dubbed “contemplative science”, holds the SRI each summer to bring together clinicians, neuroscientists, scholars, and contemplatives (mostly monks) in a format that is half conference and half meditation retreat. The summer research institute is always a ton of fun, and a great place to further one’s understanding of Buddhism & meditation while sharing valuable research insights.

I was lucky enough to receive a Varela award for my work in meta-cognition and mental training, and so this was my second year attending. I chose to take a slightly different approach from my first visit, when I basically followed the program and did whatever the M&L thought was the best use of my time. This meant lots of meditation- more than two hours per day, not including the whole-day silent “mini-retreat”. While I still practiced this year, I felt less obliged to do the full program, and I’m glad I took this route as it provided me a somewhat more detached, almost outsider view of the spectacle that is the Mind & Life SRI.

When I say spectacle, it’s important to understand how unconventional a conference setting the SRI really is. Each year dozens of ambitious neuroscientists and clinicians meet with dozens of Buddhist monks and western “mindfulness” instructors. The initial feeling is one of severe culture clash; ambitious young scholars who can hardly wait to mention their Ivy League affiliations meet with the austere and almost ascetic approach of traditional Buddhist philosophy. In some cases it almost feels like a race to “out-mindful” one another, as folks put on a great show of piety in order to fit into the mood of the event. It can be a bit jarring to oscillate between the supposed tranquility and selflessness of mindfulness and the unabashed ambition of these highly talented and motivated folk- at least until things settle down a bit.

Nonetheless, the overall atmosphere of the SRI is one of serenity and scholarship. It’s an extremely fun, stimulating event, rich with amazingly talented yoga and meditation instructors and attended by the top researchers within the field. What follows is thus not meant as any kind of attack on the overall mission of the M&L. Indeed, I’m more than grateful to the institute for carrying on at least some form of Francisco Varela’s vision for cognitive science, and of course for supporting my own meditation research. With that being said, we turn to the question at hand: are monks objectively better at introspection? The answer for nearly everyone at the SRI appears to be “yes”, despite the scarcity of data suggesting this to be the case.

Enactivism and Francisco Varela

Before I can really get into this issue, I need to briefly review what exactly “enactivism” is and how it relates to the SRI. The Mind & Life institute was co-founded by Francisco Varela, a Chilean biologist and neuroscientist who is often credited with the birth and success of embodied and enactive cognitive science. Varela had a profound impact on scientists, philosophers, and cognitive scientists and is a central influence in my own theoretical work. Varela’s essential thesis was outlined in his book “The Embodied Mind”, in which Varela, Thompson, and Rosch attempted to outline a new paradigm for the study of mind. In the book, Varela et al rely on examples from cross-cultural psychology, continental phenomenology, Buddhism, and cognitive science to argue that cognition (and mind) is essentially an embodied, enactive phenomenon. The book has since spawned a generation of researchers dedicated in some way to the idea that cognition is not essentially, or at least foundationally, computational and representational in form.

I don’t intend here to get into the roots of what enactivism is; for the present it suffices to say that enactivism as envisioned by Varela involved a dedication to the “middle way”, in which the idealism-objectivism duality is collapsed in favor of a dynamical, non-representational account of cognition and the world. I very much favor this view and try to use it productively in my own research. Varela argued throughout his life that cognition was not essentially an internal, info-processing kind of phenomenon, but rather an emergent and intricately interwoven entity arising from our history of structural coupling with the world. He further argued that cognitive science needed to develop a first-person methodology if it was to fully explain the rich panorama of human consciousness.

A simpler way to put this is to say that Varela argued persuasively that minds are not computers “parachuted into an objective world” and that cognition is not about sucking up impoverished information for representation in a subjective format. While Varela invoked much of Buddhist ontology, including concepts of “emptiness” and “inter-relatedness”, to develop his account, continental phenomenologists like Heidegger and Merleau-Ponty also heavily inspired his vision of 4th wave cognitive science. At the SRI there is little mention of this; most scholars are unaware of the continental literature, or that phenomenology is not equal to introspection. Indeed I had to cringe when one to-be-unnamed young scientist declared a particular spinal pathway to be “the central pathway for embodiment”.

This is a stark misunderstanding of just what embodiment means, and one that I would argue renders it a relatively trivial add-on to the information processing paradigm- something most enactivists would strongly resist. I politely pointed the gentleman to the work of Ulric Neisser, who argued for the ecological embodied self, in which the structure of the face is said to pre-structure human experience in particular ways, i.e. we typically experience ourselves as moving through the world toward a central fovea-centered point. Embodiment is an external, or pre-noetic, structuring of the mind; it follows no nervous pathway but rather structures the possibilities of the nervous system and mind. I hope he follows that reference down the rabbit hole of the full possibilities of embodiment- the least of which is body-as-extra-module.

Still, I certainly couldn’t blame this particular scientist for his misunderstanding; nearly everyone at the SRI is totally unfamiliar with externalist/phenomenal perspectives, which is a sad state of affairs for a generation of scientists being supported by grants in Varela’s name. Regardless of Varela’s vision for cognitive science, his thesis regarding introspectionism is certainly running strong: first-person methodologies are the hot topic of the SRI, and nearly everyone agreed that by studying contemplative practitioners’ subjective reports, we’d gain some special insight into the mind. Bracketing whether introspection is what Varela really meant by neurophenomenology (I don’t think it is- phenomenology is not introspection), we are brought to the central question: are Buddhist practitioners expert introspectionists?

Expertise and Introspectionism

Varela certainly believed this to some degree. It’s not entirely clear to me that the bulk of Varela’s work sums to this maxim, but it is certainly true that in papers such as his seminal “Neurophenomenology: a methodological remedy to the hard problem?” he argued that a careful first-person methodology could reap great benefits in this arena. Varela later followed up this theoretical thesis with his now well-known experiment conducted with then PhD student and my current mentor, Antoine Lutz.

While I won’t reproduce the details of this experiment at length here, Lutz and Varela demonstrated that it was in fact possible to inform and constrain electrophysiological measurements through the collection and systemization of first-person reports. It’s worth noting here that the participants in this experiment were everyday folks, not meditation practitioners, and that Lutz & Varela developed a special method to integrate the reports rather than taking them simply at face value. In fact, while Varela did often suggest that we might, through careful contemplation and collaboration with the Buddhist tradition, refine first-person methodologies and gain insight into the “hard problem”, he never did complete these experiments with practitioners, a fact that can likely be attributed to his premature death from aggressive hepatitis.

Regardless of Varela’s own work, it’s fascinating to me that at today’s SRI, if there is one thing nearly everyone seems to explicitly agree on, it’s that meditation practitioners have some kind of privileged access to experience. I can’t count how many discussions seemed to simply assume the truth of this, despite the fact that almost no empirical research has demonstrated any kind of increased meta-cognitive capacity or accuracy in adept contemplatives.

While Antoine and I are in fact running experiments dedicated to answering this question, the fact remains that this research is largely exploratory and without strong empirical leads to work from. While I do believe that some level of meditation practice can provide greater reliability and accuracy in meta-cognitive reports, I don’t see any reason to value the reports of contemplative practitioners above and beyond those of any other particular niche group. If I want to know what it’s like to experience baseball, I’m probably going to ask some professional baseball players and not a Buddhist monk. At several points during the SRI I tried to express just this sentiment: that studying Buddhist monks gives us a greater insight into what-it-is-like to be a monk, and not much else. I’m not sure if I succeeded, but I’d like to think I planted a few seeds of doubt.

There are several reasons for this. First, I part with Varela where he assumes that the Buddhist tradition and/or “Buddhist Psychology” have particularly valuable insights (for example, emptiness) that can’t be gleaned from western approaches. It might, but I don’t buy into the idea that the Buddhist tradition is its own kind of scientific approach to the mind; it’s not- it’s religion. For me the middle way means a lifelong commitment to a kind of metaphysical agnosticism, and I refuse to believe that any human tradition has a vast advantage over another. This was never more apparent than during a particularly controversial talk by John Dunne, a Harvard contemplative scholar, whose keynote was dedicated to getting scientists like myself to go beyond the traditional texts and veridical reports of practitioners, and to instead engage in what he called “trialogue” in order to discover “what it is practitioners are really doing”. At the end of his talk one of the Dalai Lama’s lead monks actually took great offense, scolding John for “misleading the youth with his western academic approach”. The entire debacle was a perfect case-in-point demonstration of John’s talk: one cannot simply take the word of highly religious practitioners as some kind of veridical statement about the world.

This isn’t to say that we can’t learn a great deal about experience, and the mind, through meditation and careful introspection. I think at an early level it’s enough to just sit with one’s breath and suspend beliefs about what exactly experience is. I do believe that in our modern lives we spend precious little time with the body and our minds, simply observing what arises in an impartial way. I agree with Sogyal Rinpoche that we are at times overly dis-embodied and away from ourselves. Yet this practice isn’t unique to Buddhism; the phenomenological reduction comes from Husserl and is a practice developed throughout continental phenomenology. I do think that Buddhism has developed some particularly interesting techniques to aid this process, such as Vipassana and compassion meditation, that can and will shed insights for the cognitive scientist interested in experience, and I hope that my own work will demonstrate as much.

But this is a very different claim from the one that says monastic Buddhists have a particularly special access to experience. At the end of the day I’m going to hedge my bets with the critical, empirical, and dialectical approach of cognitive science. In fact, I think there may be good reasons to suspect that high-level practitioners are inappropriate participants for “neurophenomenology”. Take for example the excellent and controversial talk given this year by Willoughby Britton, in which she described how contemplative science had been too quick to sweep under the rug a vast array of negative “side-effects” of advanced practice. These effects included hallucination, depersonalization, pain, and extreme terror. This makes a good deal of sense; advanced meditation practice is less impartial phenomenology and more a rigorous, ritualized mental practice embedded in a strong religious context. I believe that across cultures many religions share techniques, often utilizing rhythmic breathing, body postures, and intense belief priming to engender an almost psychedelic state in the practitioner.

What does this mean for cognitive science and enactivism? First, it means we need to respect cultural boundaries and not rush to put one cultural practice on top of the explanatory totem pole. This doesn’t mean cognitive scientists shouldn’t be paying attention to experience, or even practicing and studying meditation, but we have to be careful not to ignore the normativity inherent in any ritualized culture. Embracing this basic realization takes seriously individual and cultural differences in consciousness, something I’ve argued for and believe is essential for the future of 4th wave cognitive science. Neurophenomenology, among other things, should be about recognizing and describing the normativity in our own practices, not importing those of another culture wholesale. I think that this is in line with much of what Varela wrote, and luckily, the tools to do just this are richly provided by the continental phenomenological tradition.

I believe that by carefully bracketing metaphysical and normative concepts, and investigating the vast multitude of phenomenal experience in its full multi-cultural variety, we can begin to shed light on the mind-brain relationship in a meaningful and not strictly reductive fashion. Indeed, in answering the question “are monks expert introspectionists” I think we should carefully question the normative thesis underlying that hypothesis- what exactly constitutes “good” experiential reports? Perhaps by taking a long view on Buddhism and cognitive science, we can begin to truly take the middle way to experience, where we view all experiential reports as equally valid statements regarding some kind of subjective state. The question then becomes primarily longitudinal: do experiential reports demonstrate a kind of stability or consistency over time? How do trends in experiential reports relate to neural traits and states? And how do these phenomena interact with the particular cultural practices within which they are embedded? For me, this is the central contribution of enactive cognitive science and the best way forward for neurophenomenology.

Disclaimer: I am in no way suggesting enactivists cannot or should not study advanced Buddhism if that is what they find interesting and useful. I of course realize that the M&L SRI is a very particular kind of meeting, and that many enactive cognitive scientists can and do work along the lines I am suggesting. My claim is regarding best practices for the core of 4th wave cognitive science, not the fringe. I greatly value the work done by the M&L and found the SRI to be an amazingly fruitful experience.

Google Wave for Scholarly Co-authorship: excerpt from Neuroplasticity and Consciousness Abstract

Gary Williams and I are working together on a paper investigating consciousness and neuroplasticity. We’re using Google Wave for this collaboration, and I must say it is an excellent co-authorship tool. There is nothing quite so neat as watching your ideas flow and meld together in real time. There are now new built-in document templates that make these kinds of projects a blast. As an added bonus, all edits are identified and tracked in real time, letting you keep easy track of who wrote what. One of the most surprising things to come out of this collaboration is the newness of the thoughts. Whatever it is we end up arguing, it is definitely not reducible to the sum of its parts. As a teaser, I thought I’d post a thread from the wave I made this morning. This is basically just me rambling on about consciousness and plasticity after reading the results of our wave. I wish I could post the movie of our edits, but that will have to wait for the paper’s submission.

I have an idea I want to work in that was provoked by this paper:
http://www.jneurosci.org/cgi/content/abstract/30/18/6205

Somewhere in here I still feel a nagging paradox, but I can’t seem to put my finger on it. Maybe I’m simply trying to explain something I don’t have an explanation for. I’m not sure. Consider this a list of thoughts that may or may not have any relationship to the kind of account we want to make here.

They basically show that different synesthetic experiences have different neural correlates in the structural brain matter. I think it would be nice to tie our paper to the (likely) focus of the other papers: the idea of changing qualia / changing NCCs. Maybe we can argue that, due to neural plasticity, we should not expect ‘neural representations’ for sensory experience between any two adults to be identical; rather we should expect that every individual develops their own unique representational qualia that are partially ineffable. Then we can argue that this is precisely why we must rely on narrative scaffolding to make sense of the world; it is only through practice with narrative, engendered by frontal plasticity, that we can understand the statistical similarities between our qualia and others’. Something is not quite right in this account though… and our abstract is basically fine as is.

So, I have my own unique qualia that are constantly changing- my qualia and NCCs are in dynamical flux with one another. However, my embodiment pre-configures my sensory experience to have certain common qualities across the species. Narrative explanations of the world are grounded in capturing this intersubjectivity; they are linguistic representations of individual sense impressions woven together by cultural practices and schema. What we want to say is that I am able to learn about the world through narrative practice precisely because I am able to map my own unique sensory representations onto others.

I guess that last part of what I said is still weak, but it seems like this could be a good element to explore in the abstract. Maybe it also keeps us from straying too far from the angle of the call. I can’t figure out exactly what I want to say. There are a few elements:

  • Narratives are co-created, coherent, shareable, complex representations of the world that encode temporality, meaning, and intersubjectivity.
  • I’m able to learn about these representations of the world through narrative practice; by mapping my own unique dynamic sensory experience to the sensory and folk psychological narratives of others.
  • Narrative encodes sensory experience in ways that transcend the limits of personal qualia; they are offloaded and are no longer dynamic in the same way.
  • Sensory experience is in constant flux and can be thrown out of alignment with narrative, as in the case of most psychopathology.
  • I need some way to structure this flux; narrative is intersubjective and it provides second order qualia??
  • Narrative must be plastic as it is always growing; the relations between events, experiences, and sensory representations must always be shifting. Today I may really enjoy the smell of flowers and all the things that come with them (the memory of a past girlfriend, my enjoyment of things that smell sweet, the association I have with hunger). But tomorrow I might get buried alive in some flowers; now my sensory representation for flowers is going to have all new associations. I may attend to a completely different set of salient factors; I might find that the smell now reminds me of a grave, that I remember my old girlfriend was a nasty bitch, and that I’m allergic to sweet things. This must be reflected in the connective weights of the sensory representations; the overall connectivity map has been altered because a node (the flower node) has been drastically altered by a contra-narrative sensory trauma.
  • I think this is a crucial account and it helps explain the role of the default mode in consciousness. On this account, the DMN is the mechanism driving reflective, spontaneous narrativization of the world. These oscillations are akin to the constant labeling and scanning of my sensory experience. That they persist in sleep probably indicates that this process is highly automatic and involved in memory formation. As introspective thoughts begin to gain coherency and collude together, they gain greater roles in my overall conscious self-narrative.
  • So I think this is what I want to say: our pre-frontal default mode system is in constant flux. The nodes are all plastic, and so is the pattern of activations between them. This area is fundamentally concerned with reflective self-relatedness and probably develops through childhood interaction. Further, there is an important role of control here. I think that a primary function of social-constructive brain areas is in the control of action. Early societies developed complex narrative rule systems precisely to control and organize group action. This allowed us to transcend simple brute force and begin to coordinate action and to specialize in various agencies. The medial prefrontal cortex, the central node, fundamentally invoked in acts of social cognition and narrative comprehension, has massive reciprocal connectivity to limbic areas, and also to pre-frontal areas concerned with reward and economic decision making.
  • We need a plastic default mode precisely to allow for the kinds of radical enculturation we go through during development. It is quite difficult to teach an infant, born with the same basic equipment as a caveman, the intricacies of mathematics and philosophy. Clearly narrative comprehension requires a massive amount of learning; we must learn all of the complex cultural nuances that define us as modern humans.
  • Maybe sensory motor coupling and resonance allow for the simulation of precise spatiotemporal activity patterns. This intrinsic activity is like a constant ‘reading out’ of the dynamic sensory representations that are being constantly updated, through neuroplasticity; whatever the totality of the connection weights, that is my conscious narrative of my experience.
  • Back to the issue of control. It’s clear to me that the prefrontal default system is highly sensitive to intersubjective or social information/cues. I think there is really something here about offloading intentions, which are relatively weak constructions, into the group, where they can be collectively acted upon (like in the drug addict/rehab example). So maybe one role of my narration system is simply to vocalize my sensory experience (I’m craving drugs. I can’t stop craving drugs) so that others can collectively act on them.
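The “connective weights” idea from the flower-node bullet above can be made concrete with a toy sketch. This is purely illustrative: the node names, weights, and learning rate are all invented, and nothing here is meant as a serious model of neural connectivity:

```python
# Purely illustrative toy of the flower-node idea: a node's association
# weights in a connectivity map get re-valenced after a strongly
# contra-narrative experience. All names and numbers are invented.

flower_node = {
    "past_girlfriend": 0.8,  # fond memory
    "sweet_smell": 0.6,      # enjoyment
    "hunger": 0.3,           # mild positive association
}

def update_weights(node, experience, rate=0.9):
    """Shift each association weight toward the valence of a new experience.

    rate controls how strongly the new experience overwrites the old
    weight (a trauma would correspond to a high rate).
    """
    for assoc, new_valence in experience.items():
        old = node.get(assoc, 0.0)
        node[assoc] = (1 - rate) * old + rate * new_valence
    return node

# A traumatic event re-valences the same associations.
trauma = {"past_girlfriend": -0.7, "sweet_smell": -0.5, "hunger": -0.2}
update_weights(flower_node, trauma)
print(flower_node["past_girlfriend"])  # formerly fond, now negative
```

The point of the sketch is just that the same node survives the event, but its position in the overall connectivity map flips; this is the kind of local alteration the bullet describes.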

Well there you have it. I have a feeling this is going to be a great paper. We’re going to try and flip the whole debate on its head and argue for a central role of plasticity in embodied and narrative consciousness. It’s great fun to be working with Gary again; his mastery of philosophy of mind and phenomenology is quite fearsome, and we’ve been developing these ideas forever. I’ll be sure to post updates from GWave as the project progresses.

Snorkeling ’the shallows’: what’s the cognitive trade-off in internet behavior?

I am quite eager to comment on the recent explosion of e-commentary regarding Nicholas Carr’s new book. Bloggers have already done an excellent job summarizing the response to Carr’s argument. Further, Clay Shirky and Jonah Lehrer have both argued convincingly that there’s not much new about this sort of reasoning. I’ve also argued along these lines, using the example of language itself as a radical departure from pre-linguistic living. Did our predecessors worry about their brains as they learned to represent the world with odd noises and symbols?

Surely they did not. And yet we can also be sure that the brain underwent a massive revolution following the acquisition of language. Chomsky’s linguistics would of course obscure this fact, preferring us to believe that our linguistic abilities are the amalgamation of things we already possessed: vision, problem solving, auditory and acoustic control. I’m not going to spend too much time arguing against the modularist view of cognition however; chances are if you are here reading this, you are already pretty convinced that the brain changes in response to cultural adaptations.

It is worth sketching out a stock Chomskyian response however. Strict nativists, like Chomsky, hold that our language abilities are the product of an innate grammar module. Although typically agnostic about the exact source of this module (it could have been a genetic mutation for example), nativists argue that plasticity of the brain has no potential other than slightly enhancing or decreasing our existing abilities. You get a language module, a cognition module, and so on, and you don’t have much choice as to how you use that schema or what it does. The development of language on this view wasn’t something radically new that changed the brain of its users, but rather a novel adaptation of things we already had and still have.

To drive home the point, it’s not surprising that notable nativist Steven Pinker is quoted as simply not buying the ‘changing our brains’ hypothesis:

“As someone who believes both in human nature and in timeless standards of logic and evidence, I’m skeptical of the common claim that the Internet is changing the way we think. Electronic media aren’t going to revamp the brain’s mechanisms of information processing, nor will they supersede modus ponens or Bayes’ theorem. Claims that the Internet is changing human thought are propelled by a number of forces: the pressure on pundits to announce that this or that “changes everything”; a superficial conception of what “thinking” is that conflates content with process; the neophobic mindset that “if young people do something that I don’t do, the culture is declining.” But I don’t think the claims stand up to scrutiny.”

Pinker makes some good points- I agree that a lot of hype is driven by the kinds of thinking he mentions. Yet, I do not at all agree that electronic media cannot and will not revamp our mechanisms for information processing. In contrast to the nativist account, I think we’ve better reason than ever to suspect that the relation between brain and cognition is not 1:1 but rather dynamic, evolving with us as we develop new tools that stimulate our brains in unique and interesting ways.

The development of language massively altered the functioning of our brain. Given the ability to represent the world externally, we no longer needed to rely on perceptual mechanisms in the same way. Our ability to discriminate amongst various types of plants, or sounds, is clearly inferior to that of our non-linguistic brethren. And so we come full circle. The things we do change our brains. And it is the case that our brains are incredibly economical. We know, for example, that only hours after limb amputation, our somatosensory neurons invade the dormant cells, reassigning them rather than letting them die off. The brain is quite massively plastic- Nicholas Carr certainly gets that much right.

Perhaps the best way to approach this question is with an excerpt from social media. I recently asked of my fellow tweeps,

To which an astute follower replied:

Now, I do realize that this is really the central question in the ‘shallows’ debate. Moving from the basic fact that our brains are quite plastic, we all readily accept that we’re becoming the subject of some very intense stimulation. Most social media, or general internet users, shift rapidly from task to task, tweet to tweet. In my own work flow, I may open dozens and dozens of tabs, searching for that one paper or quote that can propel me to a new insight. Sometimes I get confused and forget what I was doing. Yet none of this interferes at all with my ‘deep thinking’. Eventually I go home and read a fantastic sci-fi book like Snow Crash. My imagination of the book is just as good as ever; and I can’t wait to get online and start discussing it. So where is the trade-off?

So there must be a trade-off, right? Tape a kitten’s eyes shut and its visual cortex is re-assigned to other sensory modalities. The brain is a nasty economist, and if we’re stimulating one new thing we must be losing something old. Yet what did we lose with language? Perhaps we lost some vestigial abilities to sense and smell. Yet we gained the power of the sonnet, the persuasion of rhetoric, the imagination of narrative, the ability to travel to the moon and murder the earth.

In the end, I’m just not sure it’s the right kind of stimulation to cost us anything. We’re not going to lose our ability to read; in fact, I think I can make an extremely tight argument against the specific hypothesis that the internet robs us of our ability to deep-think. Deep thinking is itself a controversial topic. What exactly do we mean by it? Am I deep thinking if I spend all day shifting between 9 million tasks? Nicholas Carr says no, but how can he be sure those 9 million tasks are not converging around a central creative point?

I believe, contrary to Carr, that internet and social media surfing is a unique form of self-stimulation and expression. By interacting together in the millions through networks like Twitter and Facebook, we’re building a cognitive apparatus that, like language, does not function entirely within the brain. By increasing access to information and the customizability of that access, we’re ensuring that millions of users have access to all kinds of thought-provoking information. In his book, Carr says things like ‘on the internet, there’s no time for deep thought. It’s go go go’. But that is only one particular usage pattern, and it ignores ample research suggesting that posts online may in fact be more reflective and honest than in-person utterances (I promise, I am going to do a lit review post soon!).

Today’s internet user doesn’t have to conform to whatever Carr thinks is the right kind of deep-thought. Rather, we can ‘skim the shallows’ of Twitter and Facebook for impressions, interactions, and opinions. When I read a researcher, I no longer have to spend years attending conferences to get a personal feel for them. I can instead look at their Wikipedia page, read the discussion page, see what’s being said on Twitter. In short, skimming the shallows makes me better able to choose the topics I want to investigate deeply, and lets me learn about them in whatever temporal pattern I like. YouTube with a side of Wikipedia and blog posts? Yes please. It’s a multi-modal, whole-brain experience that isn’t likely to conform to ‘on/off’ dichotomies. Sure, something may be sacrificed- but then again, it may not be. It might be that digital technology retains enough of the old (language, vision, motivation) plus enough of the new that it brings about radically new forms of cognition. These will undoubtedly change our cognitive style, perhaps obsoleting Pinker’s Bayesian mechanisms in favor of new digitally referential ones.

So I don’t have an answer for you yet ToddStark. I do know, however, that we’re going to have to take a long hard look at the research reviewed by Carr. Further, it seems quite clear that there can be no one-sided view of digital media. It’s no more intrinsically good or bad than language. Language can be used to destroy nations just as it can tell a little girl a thoughtful bedtime story. If we’re too quick to make up our minds about what internet-cognition is doing to our plastic little brains, we might miss the forest for the trees. The digital media revolution gives us the chance to learn just what happens in the brain when it’s got a shiny new tool. We don’t know the exact nature of the stimulation, and finding out is going to require a look at all the evidence, for and against. Finally, it’s a gross oversimplification to talk about internet behavior as ‘shallow’ or ‘deep’. Research on usage and usability tells us this; there are many ways to use the internet, and some of them probably get us thinking much more deeply than others.