In this line of research, we develop computational models of how visceral signals are integrated in the brain during decision-making and self-awareness. Here, the brain is modeled as a hierarchical network in which cascading streams of predictions and prediction errors are integrated across increasingly abstract, amodal cortical layers. At the lowest level of the hierarchy, unimodal predictions encode sensory expectations (e.g., that a motion stimulus will drift left vs. right); at the highest level, a generative ‘self-model’ encodes metacognitive expectations about the overall reliability or volatility of global cortical patterns.
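The hierarchical scheme described above can be illustrated with a minimal sketch, which is not the model used in this research but a generic two-level predictive-coding update: a low-level belief is driven by sensory prediction errors from below and constrained by predictions from above, while the higher level adjusts only to the error it fails to explain. All variable names, precisions, and learning rates here are illustrative assumptions.

```python
# Minimal two-level predictive-coding sketch (illustrative, not the
# authors' model). Each level sends a prediction down and receives a
# prediction error up; beliefs follow precision-weighted errors.

def predictive_coding_step(mu1, mu2, x, pi1=1.0, pi2=1.0, lr=0.1):
    """One inference step.
    mu1: low-level (unimodal) belief about the sensory cause
    mu2: high-level ('self-model') belief predicting mu1
    x:   observed sensory input
    pi1, pi2: precisions (inverse variances) at each level
    """
    eps1 = x - mu1           # bottom-up sensory prediction error
    eps2 = mu1 - mu2         # error on the top-down prediction of mu1
    # The low level balances explaining the input against matching
    # the higher level's prediction; the high level tracks mu1.
    mu1 = mu1 + lr * (pi1 * eps1 - pi2 * eps2)
    mu2 = mu2 + lr * (pi2 * eps2)
    return mu1, mu2

# Repeated steps drive both levels toward the sensory input.
mu1, mu2 = 0.0, 0.0
for _ in range(200):
    mu1, mu2 = predictive_coding_step(mu1, mu2, x=1.0)
```

In this toy setting the precisions `pi1` and `pi2` play the role of reliability estimates: raising `pi2` makes the low-level belief cling more strongly to the higher-level prediction, which is one way hierarchical expectations can bias perception away from the raw evidence.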
This model is then used to understand how biases in perceptual decisions (e.g., the tendency to report left vs. right motion irrespective of sensory evidence) and metacognitive self-awareness (the correspondence between performance and subjective awareness) depend upon the integration of different hierarchical signals throughout this network. By modelling self-awareness using a signal detection-theoretic approach, we can quantify the accuracy of introspection in individual subjects and relate such measures to fluctuations in visceral signals and arousal. Further, by applying Bayesian models of decision-making, we can reveal how such visceral signals influence the cortical computations underlying both perceptual and metacognitive decisions.
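One common signal detection-theoretic measure of introspective accuracy, sketched here as an assumption about the kind of analysis involved rather than the specific method used in this research, is the area under the type-2 ROC curve: how well a subject's confidence ratings discriminate their own correct from incorrect trials. The function and variable names below are illustrative.

```python
# Hedged sketch of a type-2 (metacognitive) sensitivity measure:
# the area under the type-2 ROC, i.e. the probability that a randomly
# chosen correct trial carries higher confidence than a randomly
# chosen error trial (ties count half).
import numpy as np

def type2_auroc(confidence, correct):
    """confidence: ordinal confidence ratings, one per trial
    correct:    booleans, True where the perceptual decision was right
    """
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    conf_correct = confidence[correct]
    conf_error = confidence[~correct]
    # Pairwise comparison of every correct trial with every error trial.
    greater = (conf_correct[:, None] > conf_error[None, :]).mean()
    ties = (conf_correct[:, None] == conf_error[None, :]).mean()
    return greater + 0.5 * ties

# An observer whose confidence perfectly tracks accuracy scores 1.0;
# confidence unrelated to accuracy scores 0.5 (chance-level insight).
perfect = type2_auroc([4, 4, 1, 1], [True, True, False, False])
chance = type2_auroc([2, 2, 2, 2], [True, False, True, False])
```

Because this measure depends only on the ranking of confidence across trials, it can be compared across subjects and related to trial-by-trial physiological covariates such as arousal.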
As an early example of this line of research, we have recently shown that metacognitive biases for uncertain perceptual experiences can be reversed by manipulating unconscious ‘gut feelings’.