Vision for action: neural representation of real-world scene affordances in the human brain

Summary

The human brain contains several visual processing regions that selectively activate when we view real-world scenes, such as landscapes or city streets, and these regions are therefore thought to be critical for behavioural interactions with scenes, such as navigation and action planning. It is poorly understood, however, how the visual system extracts action-relevant information from retinal inputs: how it determines which actions are afforded by the visual environment currently in view. I will study this process by capitalizing on recent technological advances in both computer vision and cognitive neuroscience. First, I will quantify the visual features that are necessary to solve different scene tasks using deep neural networks (DNNs). These state-of-the-art computer vision models have recently led to great improvements in automatic object recognition, but they can also be trained on other tasks. I will use these networks as computational tools to measure scene information relevant for actions, by training different DNNs on navigational (“is there a path?”) and action-related (“can I walk here?”) tasks. Second, I will measure the neural representations underlying these tasks using functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) recordings. Using an established pattern analysis method, representational similarity analysis, I will combine these measurements (spatiotemporal fusion) to obtain neural measurements of visual processing with both high spatial and high temporal resolution. Crucially, by comparing the visual features learned by different DNNs against the neural representations evoked under different scene recognition tasks, I will identify the visual features that our brains extract to determine the action affordances of a scene. As a result, we will learn how retinal input is transformed into behaviourally relevant neural representations of real-world environments.
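The core comparison described above, relating DNN features to neural data via representational similarity analysis, can be illustrated with a minimal sketch. All data here are hypothetical random placeholders (the array sizes, variable names, and the choice of a correlation-distance dissimilarity measure are illustrative assumptions, not the project's actual pipeline): both model and brain responses are converted into representational dissimilarity matrices (RDMs) over the same stimulus set, which are then compared with a rank correlation.

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

# Hypothetical responses to 50 scene stimuli.
# Rows = stimuli; columns = DNN units or fMRI voxels (illustrative sizes).
dnn_features = rng.standard_normal((50, 512))     # e.g. activations from a task-trained DNN layer
neural_patterns = rng.standard_normal((50, 200))  # e.g. voxel patterns from a scene-selective region

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns of every pair of stimuli, returned
    as a condensed vector (the upper triangle of the full matrix)."""
    return pdist(responses, metric="correlation")

# RDMs abstract away the measurement space (units vs. voxels), so model
# and brain can be compared directly in terms of pairwise dissimilarities.
rho, p = spearmanr(rdm(dnn_features), rdm(neural_patterns))
print(f"model-brain RDM correlation: rho={rho:.3f}")
```

Because the RDM depends only on the stimulus set, the same comparison applies unchanged to fMRI patterns, MEG sensor patterns at a given time point, or features from DNNs trained on different tasks, which is what makes the fused spatiotemporal comparison possible.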

Details

Project number

VI.Veni.194.030

Main applicant

Dr. I.I.A. Groen

Affiliated with

City University of New York

Duration

02/09/2019 to 31/10/2022