Kay & Yeatman, 2017, eLife: Bottom-up and top-down computations in word- and face-selective cortex

Kay KN, Yeatman JD. Bottom-up and top-down computations in word- and face-selective cortex. eLife. 2017. http://dx.doi.org/10.7554/eLife.22341

As your eyes scan this page, your visual system performs a series of computations that allow you to derive meaning from the printed words. The visual system solves this task with such apparent ease that you may never have thought about the challenges that your brain must overcome for you to read a page of text. The brain must overcome similar challenges to enable you to recognize the faces of your friends. The goal of this paper was to develop a quantitative model that characterizes the computations performed by regions of the visual system that are specialized for processing words and faces.
The ability to read a page of text or recognize a person’s face depends on category-selective visual regions in ventral temporal cortex (VTC). To understand how these regions mediate word and face recognition, it is necessary to characterize how stimuli are represented and how this representation is used in the execution of a cognitive task. Here, we show that the response of a category-selective region in VTC can be computed as the degree to which the low-level properties of the stimulus match a category template. Moreover, we show that during execution of a task, the bottom-up representation is scaled by the intraparietal sulcus (IPS), and that the level of IPS engagement reflects the cognitive demands of the task. These results provide an account of neural processing in VTC in the form of a model that addresses both bottom-up and top-down effects and quantitatively predicts VTC responses.
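The model described above can be sketched in code. The following is a minimal illustration, not the authors' actual implementation: it assumes the bottom-up response is a normalized match between low-level stimulus features and a category template, passed through a compressive nonlinearity, and that a task-dependent gain (standing in for IPS engagement) scales that response. All function names, parameter values, and the specific nonlinearity are hypothetical.

```python
import numpy as np

def vtc_response(stimulus, template, ips_gain=1.0, exponent=0.5):
    """Hypothetical sketch of the two-stage model.

    stimulus, template: arrays of low-level image features
        (e.g. local contrast energy) for the stimulus and the
        category template, respectively.
    ips_gain: task-dependent top-down scaling (assumed > 1 when the
        task places cognitive demands on the stimulus).
    exponent: assumed compressive static nonlinearity.
    """
    # Bottom-up stage: degree to which low-level stimulus properties
    # match the category template (normalized correlation).
    match = np.sum(stimulus * template) / (
        np.linalg.norm(stimulus) * np.linalg.norm(template)
    )
    match = max(match, 0.0)        # rectify: anti-matches drive no response
    bottom_up = match ** exponent  # compressive nonlinearity

    # Top-down stage: IPS scales the bottom-up representation
    # according to the cognitive demands of the task.
    return ips_gain * bottom_up

# A stimulus identical to the template yields the maximal bottom-up
# match, which a demanding task (ips_gain > 1) amplifies.
rng = np.random.default_rng(0)
template = rng.random((8, 8))
passive = vtc_response(template, template, ips_gain=1.0)
task = vtc_response(template, template, ips_gain=1.5)
```

Under these assumptions, the same stimulus produces a larger predicted VTC response during a demanding task than during passive viewing, with the stimulus-selectivity itself fixed by the bottom-up template match.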