Dynamic and stable population coding of attentional instructions coexist in the prefrontal cortex

Edited by Robert Desimone, Massachusetts Institute of Technology, Cambridge, MA; received February 14, 2022; accepted September 6, 2022
September 26, 2022
119 (40) e2202564119

Significance

Considerable evidence suggests that neuronal ensembles in the prefrontal cortex (PFC) represent behaviorally relevant information in a dynamic manner. Here, we found that processing of attentional instructions is not ubiquitous within PFC. Comparison across two prefrontal regions, the frontal eye field (FEF) and the ventrolateral PFC (vlPFC), showed that the FEF ensemble robustly encodes spatial information, whereas the vlPFC ensemble robustly encodes both spatial and color information. Furthermore, decoding spatial and color information from vlPFC in the high-dimensional activity state space indicated stronger dynamics for color across the sensory and memory periods. However, dynamic vlPFC activity contained time-invariant color information within a low-dimensional subspace of neural activity that allowed for stable decoding of color across time.

Abstract

A large body of recent work suggests that neural representations in prefrontal cortex (PFC) change over time to adapt to task demands. However, it remains unclear whether and how such dynamic coding schemes depend on the encoded variable and are influenced by anatomical constraints. Using a cued attention task and multivariate classification methods, we show that neuronal ensembles in PFC encode spatial and color attentional instructions, and retain them in working memory, in an anatomically specific manner. Spatial instructions could be decoded both from the frontal eye field (FEF) and the ventrolateral PFC (vlPFC) population, albeit more robustly from FEF, whereas color instructions were decoded more robustly from vlPFC. Decoding spatial and color information from vlPFC activity in the high-dimensional state space indicated stronger dynamics for color across the cue presentation and memory periods. The change in the color code was largely due to rapid changes in the network state during the transition to the delay period. However, we found that dynamic vlPFC activity contained time-invariant color information within a low-dimensional subspace of neural activity that allowed for stable decoding of color across time. Furthermore, spatial attention profoundly influenced decoding of stimulus features in vlPFC, but less so in visual area V4. Overall, our results suggest that dynamic population coding of attentional instructions within PFC is shaped by anatomical constraints and can coexist with stable subspace coding that allows time-invariant decoding of information about the future target.
The primate prefrontal cortex (PFC) has been implicated in executive control, including working memory, attention, and decision-making functions (1–4). Although the evidence on the executive role of PFC in cognition is indisputable, there is currently no consensus on how task-related parameters are encoded by different PFC regions during goal-directed behavior. The prevailing view of prefrontal function posits that PFC neurons are not inherently feature selective, but rather represent task-relevant information in an adaptive way according to behavioral context (5) in order to modulate processing in other areas and guide behavior (4). Whether and how this dynamic representation of task-relevant information is constrained by anatomy and the encoded variable is still unclear.
Several electrophysiological studies have examined functional specialization within lateral PFC by comparing how spatial and nonspatial stimulus features are encoded by single PFC neurons (6–11). Although results from single-neuron analysis are informative and have provided important insights into how information is organized and encoded in the brain, they can potentially underestimate information encoded in sparse patterns of activity across a population of neurons. Averaging across trials and single neurons, or relying on the average fraction of task-relevant neurons to infer the contribution of cortical areas in encoding specific stimulus parameters, can be misleading in cases of sparse representations of information. To overcome this limitation, more recent studies have employed multivariate classification methods to examine whether information about task-relevant parameters can be decoded from single-trial activity patterns across a population of neurons (12–16).
Decoding the information content of single-trial population activity patterns can also provide insights into the stationarity of the neural code (i.e., infer whether the neural code for a specific variable is dynamic and changes over time or whether it is stable). A dynamic coding scheme predicts that when task parameters change, activity of the network shifts to a new state and stimuli are encoded differently according to behavioral context. This dynamic pattern of information coding is particularly pronounced in PFC (12, 17), a feature that may underlie cognitive flexibility. On the other hand, some studies have shown that the population code for spatial attention in PFC is remarkably stable (14, 18). Is the PFC code intrinsically stable or dynamic for all task-relevant parameters, or does the stationarity of the code depend on the exact feature and PFC area? These are fundamental aspects of the neural code. Although some studies have explored the stationarity of the neural code in different areas and paradigms (12, 17, 19–21), a direct comparison of how different behaviorally relevant attributes are encoded in the same and different PFC areas is missing.
To answer some of these questions, we tested how population activity patterns in two distinct PFC regions, the frontal eye field (FEF) and the ventrolateral PFC (vlPFC) anterior to the FEF, encode different attentional instructions. The FEF and vlPFC are critical for the control of spatial and feature attention, respectively (22–25). Our main goal was to examine whether spatial and color attentional instructions are encoded in a stationary or dynamic manner by PFC ensembles across two task epochs with distinct requirements, a sensory presentation period and a memory period. Moreover, we aimed to compare the neural code for the different attributes across the two PFC areas in order to assess whether the code for a given attribute depends on the anatomical area. To this end, we recorded simultaneously from the FEF and vlPFC, as well as from visual area V4, in a cued attention task with either a spatial or a color cue. We employed decoding approaches from spiking and LFP signals. Our results show anatomical specificity in the neural code. Spatial information was decoded from both FEF and vlPFC but with higher accuracy from the FEF population, whereas color information was decoded more robustly from vlPFC during the cue and delay periods. More importantly, decoding color and spatial information from a high-dimensional vlPFC activity state space indicated that the population code for the color instruction changed during the delay period in vlPFC, whereas the code for location was relatively stable. Despite the dynamic encoding of the color instruction in vlPFC, we found a low-dimensional shared subspace where the code for color was stable across the two epochs. These results show that vlPFC maintains behaviorally relevant time-invariant information despite differences in the code between the cue presentation and memory periods.
Furthermore, encoding of stimulus features was dramatically influenced by spatial attention in vlPFC, where only features of attended stimuli were encoded, in contrast to V4, where stimulus features were encoded irrespective of attention. Altogether, these results shed light on the functional anatomy of visual processing and working memory in the prefrontal and visual cortices and highlight how the neural code for attentional instructions is influenced by anatomy, functional selectivity, and task demands.

Results

We trained two monkeys in a cued covert attention task (Fig. 1A). Briefly, following initial fixation, in some trials, a spatial or a color cue appeared at the center of the screen (spatial or color pre-cueing). The spatial cue instructed the location of the future target (lower left/right, upper left/right) and the color cue instructed the color of the future target (red, blue, or green). Animals had to maintain central fixation and memorize the spatial or color cue during the subsequent delay period. Subsequently, four sinusoidal colored gratings of different orientations were presented at equal distances from the center of the monitor. Monkeys were required to report the target’s orientation using a lever that could be moved to three positions while maintaining central fixation. In other trials, following the initial fixation, the four-stimulus array appeared on the screen before any instruction on where to attend (spatial postarray-cueing). Subsequently, the spatial cue appeared at the center of the screen indicating the location of the target. Monkeys had to report the orientation of the target as in the pre-cueing trials. Both monkeys performed very well, with accuracies above 85% (pre-cueing trials: monkey PT 90%, monkey DL 88%; postarray-cueing trials: monkey PT 96%, monkey DL 91%).
Fig. 1.
(A) Behavioral task. Three different types of trials were used. Monkeys were required to fixate a central fixation spot. In the spatial and color pre-cueing trials (upper two rows), the fixation spot was subsequently replaced by the spatial or color cue. Following a delay period during which the monkeys had to memorize the cue instruction (location or color of future target), an array of four gratings appeared on the screen. The monkeys had to shift attention to the target stimulus covertly and respond about the orientation of the grating using a lever. In some trials, the array appeared first followed by the spatial cue (spatial postarray-cueing, third row). (B) Recording locations shown on the surface of the frontal lobe of monkey PT (Upper) and monkey DL (Lower). Red circles indicate penetrations in the FEF and blue circles penetrations in vlPFC. Brain reconstructions were done using the Caret software package (SI Appendix, Supporting Methods).
We recorded multiunit activity from the right vlPFC, FEF, and visual area V4 simultaneously in the two monkeys (Fig. 1B). In vlPFC, we recorded a total of 365, 344, and 335 units in the spatial pre-cueing, color pre-cueing, and spatial postarray-cueing conditions, respectively; in FEF, we recorded 110, 109, and 105 units in the same conditions; and in V4, 518, 510, and 483 units, respectively.

Spatial and Color Information Is Encoded in PFC in an Anatomically Specific Manner.

To test whether PFC neurons encoded spatial and/or color information, we first quantified the selectivity of each unit by employing the percentage of explained variance (PEV) metric in a time-resolved manner in the pre-cueing trials as previously described (11, 26) (SI Appendix, Fig. S1 A–D). We found that in vlPFC, a significant proportion of units showed spatial/color selectivity during the cue and/or delay periods (∼35–55% for spatial selectivity and 20–35% for color selectivity; P < 0.001, binomial test against chance of 5%). A significant proportion of units with spatial/color selectivity was also found in FEF during the same periods (∼40–60% for spatial selectivity and 10–25% for color selectivity; P < 0.001, binomial test against chance of 5%) (SI Appendix, Fig. S2). Note that the range of percentages is related to the variability in selectivity over time. The proportion of vlPFC units that were selective for both color and spatial instruction was 15% in the cue (600–800 ms) and 16% in the delay (200–400 ms into the delay) periods (binomial test against chance of 5%, P < 0.001; 24% and 27% respectively, among selective units), whereas in FEF, 9% and 14% of units were selective for both color and spatial instruction in the same periods (P = 0.1 and P < 0.001, respectively, binomial test; 14% and 20%, respectively among selective units) (SI Appendix, Fig. S3).
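Since the PEV metric recurs throughout these analyses, a minimal sketch may help make it concrete. The version below is a plain eta-squared (percentage of between-class variance from a one-way ANOVA decomposition), without the bias correction used in refs. 11 and 26; the function name and interface are our own illustrative assumptions, not the authors' code:

```python
import numpy as np

def percent_explained_variance(rates, labels):
    """Percentage of a unit's trial-to-trial firing-rate variance that is
    accounted for by the cue identity (eta-squared, uncorrected).
    rates: single-trial firing rates for one unit in one time bin.
    labels: cue identity for each trial (same length as rates)."""
    rates = np.asarray(rates, dtype=float)
    labels = np.asarray(labels)
    grand_mean = rates.mean()
    ss_total = np.sum((rates - grand_mean) ** 2)
    # Between-class sum of squares: class sizes times squared mean offsets
    ss_between = sum(
        len(rates[labels == c]) * (rates[labels == c].mean() - grand_mean) ** 2
        for c in np.unique(labels)
    )
    return 100.0 * ss_between / ss_total if ss_total > 0 else 0.0
```

In practice this would be computed per unit in sliding time windows, and significance assessed against a shuffle distribution.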
To better appreciate the contribution of each area in the encoding of location and color, we asked whether information about the cue identity could be reliably decoded from firing rate responses across the neuronal population in each region on a trial-by-trial basis. To this end, we used a linear support vector machine (SVM) algorithm that classifies responses based on the pattern of activity across the population in each area. SVM decoders have been previously shown to reliably extract multidimensional information encoded by neuronal ensembles (10, 14, 27). To allow for a direct comparison of decoding performance across the spatial and color cues in each area, we matched the number of units for the two cues in each area by employing a resampling procedure (50 resamples) (Fig. 2). Moreover, for a direct comparison of decoding performance across the two areas, we matched the number of units in the two areas for each cue by a similar resampling procedure (SI Appendix, Fig. S4). Finally, to directly compare results for color and spatial cueing, we equated the number of classes between the two conditions, i.e., we considered the three color cues (red, blue, and green) and three out of four spatial cues (upper left, lower left, and lower right). Only units from sessions with at least 25 trials in each condition were included in the analysis. Based on these constraints, our dataset consisted of 332 vlPFC units, 101 FEF units and 477 V4 units for the spatial cueing condition, and of 297 vlPFC units, 86 FEF units and 430 V4 units for the color cueing condition.
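The trial-by-trial population decoding scheme can be sketched as follows. For simplicity, a nearest-centroid classifier stands in here for the linear SVM used in the paper (both are linear readouts), and the fold scheme, names, and absence of the 50-resample procedure are illustrative assumptions:

```python
import numpy as np

def cross_validated_decoding(pop_rates, labels, n_folds=5, seed=0):
    """Cross-validated decoding of cue identity from single-trial
    population activity. A nearest-centroid readout stands in for the
    linear SVM of the original analysis.
    pop_rates: (n_trials, n_units) firing rates; labels: (n_trials,)."""
    pop_rates = np.asarray(pop_rates, dtype=float)
    labels = np.asarray(labels)
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(labels))
    folds = np.array_split(order, n_folds)
    correct = 0
    for test_idx in folds:
        train_idx = np.setdiff1d(order, test_idx)
        classes = np.unique(labels[train_idx])
        # One centroid (mean population pattern) per cue class
        centroids = np.stack([
            pop_rates[train_idx][labels[train_idx] == c].mean(axis=0)
            for c in classes])
        for i in test_idx:
            dists = np.linalg.norm(centroids - pop_rates[i], axis=1)
            correct += classes[np.argmin(dists)] == labels[i]
    return correct / len(labels)
```

With three equiprobable classes, chance performance is 33.3%, matching the dashed lines in Fig. 2.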
Fig. 2.
Decoding accuracies obtained from the populations of vlPFC and FEF units during the cue and delay periods. (A) Mean decoding accuracy over time for the vlPFC neuronal population for the location (black line) and color (red line) instructed by the cue. Responses were classified into three classes (Upper Left, Lower Left, and Lower Right or red, green, blue) with equal prior probabilities, thus, chance performance was at 33.3% shown by the horizontal dashed line. Zero on the x-axis corresponds to the onset of the cue in the pre-cueing trials. Vertical dashed black line indicates the onset of the delay period. Rightmost part corresponds to data in the delay period aligned on array onset. (B) Same as (A) but for FEF. In all graphs, the two horizontal color lines at the top show periods with significant decoding accuracy compared to chance for each line (permutation test, P < 0.001). The horizontal line at the bottom of each graph shows time periods with significant differences between the two conditions (spatial vs. color, cluster-based permutation test, P < 0.001).
A comparison across the two PFC regions for the two cues showed that while spatial information could be reliably decoded from both regions, color information was more robustly and consistently decoded from the vlPFC population (Fig. 2 A and B). Specifically, in vlPFC the identity of both spatial and color cues could be robustly decoded with accuracies significantly above chance (permutation test, P < 0.001; Fig. 2A). Decoding accuracies peaked at around 150 ms and remained high (around 80–90%) throughout the cue and delay periods. Notably, performance was significantly higher for the spatial cue (P < 0.001, cluster-based permutation test). In FEF, decoding accuracy for the spatial cue also peaked at about 150 ms and was significantly above chance throughout the trial (P < 0.001, permutation test; Fig. 2B). By contrast, decoding performance for the color cue was significantly lower than that for the spatial cue (P < 0.001, cluster-based permutation test) and exceeded chance only transiently during the cue and delay epochs (permutation test; Fig. 2B). This difference from vlPFC was not due to the smaller size of the FEF population (86 vs. 297 neurons). When we equated the populations in vlPFC and FEF, decoding performance for color in vlPFC was significantly above chance and significantly higher than that in the FEF (P < 0.001, cluster-based permutation test, SI Appendix, Fig. S4B). These results demonstrate that the FEF population carries weak color information, and both PFC regions carry robust spatial information about the future target. A direct comparison of decoding accuracies for spatial cueing obtained from vlPFC and FEF populations revealed that decoding accuracies from the FEF were significantly higher than those from vlPFC throughout the cue and delay periods (P < 0.001, cluster-based permutation test, SI Appendix, Fig. S4A). 
A similar analysis carried out for V4 showed that although V4 receptive fields (RFs) were centered in the lower left quadrant and did not include the fovea where the cue was presented, color information could be decoded from V4 firing rates with accuracies significantly above chance, whereas spatial information could not (SI Appendix, Supporting Results).
Given that the percentage of FEF neurons responding to color is significantly greater than what is expected by chance (binomial test, P < 0.001) and that it is not significantly different from that in vlPFC (χ2 test, P = 0.19), it may seem surprising that robust decoding of color information is not possible from the FEF population. Results of additional analyses show that a small percentage of color-selective neurons in FEF with relatively sustained selectivity (9/86 units) carry as a group some color information in the delay period; however, the overall low and transient selectivity across all FEF units does not allow for robust decoding of color information at the population level (SI Appendix, Supporting Results and Fig. S5).

Temporal Dynamics of the Population Code for Color and Spatial Instructions in PFC.

We next sought to explore the dynamics of the neural code in the two prefrontal regions. One way to examine population dynamics in time is to study the stationarity of the code by training a classifier at one time period and testing with data from another time period. If the contribution of individual cells within the network remains the same over time, a classifier trained at one time should generalize equally well at other times. On the other hand, if population dynamics change, the generalization of a decoder over time will be poor.
Results of cross-temporal decoding analysis are shown in Fig. 3. Decoding accuracies that fall across the diagonal were obtained by training and testing the classifier at the same time bin (i.e., they are equivalent to those in Fig. 2 A and B). To estimate how well the code generalizes at other time bins (i.e., the stationarity or dynamic nature of the code), we compared for each training time bin the decoding performance on the diagonal to that of other test bins along the same vertical line (one-sided permutation test, P < 0.001).
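The cross-temporal procedure amounts to fitting a decoder at one time bin and evaluating it at every other bin, producing a train-time × test-time accuracy matrix whose diagonal matches standard decoding. A sketch follows; a nearest-centroid readout again stands in for the SVM, and the single train/test split is a simplification of the resampling and permutation testing in the paper:

```python
import numpy as np

def cross_temporal_accuracy(rates, labels, train_frac=0.8, seed=0):
    """Train at each time bin, test at every bin.
    rates: (n_trials, n_units, n_bins); labels: (n_trials,).
    Returns an (n_bins, n_bins) matrix; entry [t_train, t_test] is the
    accuracy of a decoder fit at t_train and evaluated at t_test."""
    rng = np.random.default_rng(seed)
    n_trials, _, n_bins = rates.shape
    order = rng.permutation(n_trials)
    n_train = int(train_frac * n_trials)
    tr, te = order[:n_train], order[n_train:]
    classes = np.unique(labels)
    acc = np.zeros((n_bins, n_bins))
    for t_train in range(n_bins):
        centroids = np.stack([
            rates[tr][labels[tr] == c][:, :, t_train].mean(axis=0)
            for c in classes])
        for t_test in range(n_bins):
            X = rates[te][:, :, t_test]
            d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            pred = classes[np.argmin(d, axis=1)]
            acc[t_train, t_test] = np.mean(pred == labels[te])
    return acc
```

A stationary code yields high accuracy far from the diagonal; a dynamic code yields high accuracy only near it.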
Fig. 3.
Temporal dynamics of the spatial and color code in PFC. (A) Cross-temporal classification analysis for the spatial code on the vlPFC population. (B) Same as (A) but for the color code in vlPFC. (C) Same as (A) but for FEF. Color scale represents decoding accuracies. Training time of the classifier is shown on the abscissa and test time is shown on the ordinate. Time is relative to cue onset (or array onset in the rightmost/uppermost part). Vertical and horizontal dashed white lines denote delay onset. Black dashed contours enclose time periods with accuracies not significantly different from those on the diagonal (permutation test, P < 0.01).
The results indicated differences in the code for spatial and color information during the cue and delay periods in vlPFC (Fig. 3 A and B). We found that the code for spatial information in vlPFC generalized between the cue and delay periods. Decoding performance was not significantly different from that on the diagonal when training using data from the late cue period and testing using data from the early delay period and vice versa (Fig. 3A). Similar results were obtained for the spatial cue from the FEF (Fig. 3C). By contrast, the code for color information in vlPFC did not generalize between the cue and delay periods; training the classifier with data from the cue/delay period and testing with data from the same period gave significantly higher accuracies than those obtained from testing with data from the delay/cue period, respectively (Fig. 3B). A direct quantitative statistical comparison between results from the two cueing conditions in vlPFC showed that during the early delay period, a higher number of time bins with similar accuracies to those on the diagonal can be found in the spatial cueing condition, compared to the color cueing condition (SI Appendix, Fig. S7, χ2 test, P < 0.01).
A closer look into the data identifies three periods with distinct profiles for both types of cues. During an early period, immediately after cue presentation (0 to ∼350 ms), the code is dynamic and does not generalize to other time epochs (Fig. 3 A–C). Thus, in agreement with a previous study (17), we find that in the early period, the pattern of activity that differentiates between cues is unique and does not extend into other time periods. Subsequently, during the late cue period (350–900 ms) the code enters a stationary regime with good generalization across different time bins within this period (Fig. 3 A–C). Notably, the stationarity of the spatial code extends also into the delay period both in vlPFC and the FEF (Fig. 3 A and C). On the other hand, the color code in vlPFC enters a new dynamic state during the early part of the delay period (Fig. 3B). These results suggest a more stationary code for spatial information in vlPFC and FEF across the cue and delay epochs and a rather dynamic code for color information in vlPFC.
The temporal evolution of population activity across the cue and delay epochs was further examined by plotting the activation state trajectory as previously described (17). Briefly, the temporal evolution of vlPFC population activity was expressed as a multidimensional trajectory, in which each point corresponded to the instantaneous firing rate of the population in an N-dimensional space at a particular time window, with N being the number of neurons comprising the neuronal population. In Fig. 4A, we plot three hypothetical trajectories, one for each cue (red, green, blue, or upper left, lower left, and lower right). For simplicity, trajectories are shown in three dimensions, however, the actual computations were carried out in the N-dimensional space. We first confirmed that population trajectories for the three cues in each cueing condition were separated in the N-dimensional space (i.e., that population activity could discriminate the three cued locations and the three cued colors). To this end, we calculated the multidimensional Euclidean distance between the different trajectories for each cueing condition (Fig. 4B). We found that the vlPFC population could differentiate the three color/spatial cues throughout the cue and delay periods with higher overall discriminability in the early cue period (Fig. 4B). Notably, during the early cue presentation period, the average distance between trajectories was higher for the spatial compared to the color cue (cluster-based permutation test, P < 0.001) suggesting a clearer differentiation of spatial information. Following this initial period, during the late cue and delay epochs, the network could still discriminate among the three cues equally well for location and color cues.
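The trajectory-separation measure reduces to computing, at each time bin, the Euclidean distance between population state vectors in the N-dimensional space and averaging over cue pairs. A minimal sketch with illustrative names (no smoothing or baseline normalization):

```python
import numpy as np

def mean_pairwise_distance(trajectories):
    """Mean pairwise Euclidean distance among cue trajectories at each
    time bin, as a measure of cue discriminability.
    trajectories: list of (n_bins, n_units) arrays, one per cue."""
    n = len(trajectories)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    # Per-bin distance for each pair of trajectories, then average
    dists = [np.linalg.norm(trajectories[i] - trajectories[j], axis=1)
             for i, j in pairs]
    return np.mean(dists, axis=0)
```

Each row of a trajectory is the instantaneous population firing-rate vector, so the distance at bin t compares the full N-dimensional states of two cues at that moment.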
Fig. 4.
Neuronal population dynamics. (A) Schematic example of three different trajectories corresponding, for example, to the three different color cues in a three-dimensional state space. The distance between trajectories/states at any given time t in the N-space (where N is the size of the population) is a measure of the discriminability of the code based on population activity. (B) The mean multidimensional distance between responses for the three spatial cues is shown over time in black and the mean distance between responses for the three color cues is shown in red. Periods with significant differences compared to the baseline (upper 95th percentile) are indicated for each cue type by the horizontal lines at the top of the graph (permutation test, P < 0.001). The horizontal line at the bottom of the graph shows time periods with significant differences between the two lines (cluster-based permutation test, P < 0.001). (C) Instantaneous velocity through the multidimensional state space for each cue type, as a function of time. Instantaneous velocity was calculated along each trajectory as d(P1(t−n), P1(t+n))/2n, where d(P1(t−n), P1(t+n)) is the distance between two states along the same trajectory at time points that differ by 2n. Velocity was subsequently averaged across the three trajectories. The black line shows average velocity across the three spatial cues and the red line shows the average velocity across the three color cues. Periods with significant differences compared to baseline (upper 95th percentile) are indicated by the horizontal lines at the top of the graph (permutation test, P < 0.001). The horizontal line at the bottom of the graph indicates time periods with significant differences between the two lines (cluster-based permutation test, P < 0.001). Dashed vertical lines indicate the onset of the cue and delay period.
We then asked how the population activity changes over time for the two cueing conditions. A previous study showed that the rate of change (velocity) of the activation state trajectory is a more sensitive measure of changes in the network state compared to changes in global activity levels (17). Thus, we estimated the instantaneous velocity over time as d(p(t+n), p(t−n))/2n, where d(p(t+n), p(t−n)) is the multidimensional Euclidean distance between two points along state trajectory p corresponding to times t+n and t−n (n = 3 ms), both for spatial and color cues. This analysis indicated a rapid increase in velocity following cue onset after which the code settled at a more stable state (Fig. 4C and SI Appendix, Movie S1). At the onset of the delay period, we found another rapid change in the activation state for color that remained higher compared to the spatial cue throughout the delay period (cluster-based permutation test, P < 0.001). Again, this indicates that during working memory the population code in vlPFC undergoes a rapid change that is specific to the color cue, in agreement with the results obtained from the decoding analysis. Time periods during which the multidimensional velocity is reduced coincide with epochs when the code is stationary, whereas increases in velocity accompany time periods when the code is dynamic (compare with Fig. 3).
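The velocity measure defined above can be sketched directly. The function below implements d(p(t+n), p(t−n))/2n along a single trajectory; averaging across the three cue trajectories is left to the caller, and the names are illustrative:

```python
import numpy as np

def instantaneous_velocity(trajectory, n=3):
    """Rate of change of the population state along one trajectory:
    d(p(t+n), p(t-n)) / (2n), with d the Euclidean distance in the
    N-dimensional (n_units) state space.
    trajectory: (n_bins, n_units) array of instantaneous firing rates.
    Returns velocities for bins n .. n_bins - n - 1."""
    # Displacement between states separated by 2n bins along the trajectory
    disp = trajectory[2 * n:] - trajectory[:-2 * n]
    return np.linalg.norm(disp, axis=1) / (2 * n)
```

A state moving at constant speed yields a flat velocity trace; an abrupt network reconfiguration, such as the one reported at delay onset for color, appears as a transient velocity peak.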
Movie S1.
Illustration of neuronal population dynamics. 3-dimensional PCA space projections of the original N-dimensional vlPFC population state trajectories. Each of the three trajectories corresponds to a different spatial (upper left, lower left and lower right) or color (red, green, blue) cue. Colors of each trajectory correspond to different task epochs: fixation (grey), cue (red), early delay (green), late delay (blue). A rapid increase in velocity following cue onset is evident for both spatial and color cues. At the onset of the delay period, a larger increase in velocity was observed for the color compared to the spatial cue. Distance and velocity changes in activation state were quantified in Fig. 4. Note that the quantities in Fig. 4 were calculated in the original N-dimensional space. PCA projections shown here are for illustration purposes only.
A dynamic code could be the result of different sets of neurons carrying information at different time windows and/or overall increases or decreases in the firing rate (e.g., due to changes in sensory and cognitive requirements or neural adaptation). To explore these possibilities, we examined the contribution of different units to the dynamic or stationary encoding of color and spatial information in vlPFC. We identified the 60 most selective units in one time window and tested how well these perform in other time epochs. Selectivity was quantified using the PEV metric. In Fig. 5A, the vertical axis denotes the time at which the most selective units were picked, whereas the horizontal axis denotes the time at which decoding (training and testing) was performed. As expected, accuracy is highest across the diagonal as training and testing is performed on the same time bin using the 60 most selective units at that time. The performance of the decoder in other time windows using these units (i.e., generalization) can be viewed in lines running parallel to the horizontal axis. Contours in Fig. 5A enclose time bins with accuracy not significantly different from the diagonal (one-sided permutation test, P < 0.01). For the color cue, the 60 most selective units in the late cue period did not perform well in the delay epoch and vice versa, indicating that different neurons carry color information in the two epochs (Fig. 5A). By contrast, for the spatial cue, the 60 most selective units in each time bin during the late cue period performed well also during the early delay epoch and vice versa (SI Appendix, Fig. S8A).
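Selecting the most selective units at a given time amounts to ranking a unit × time matrix of PEV values along one column; a trivial sketch with hypothetical names:

```python
import numpy as np

def most_selective_units(pev, t_bin, k=60):
    """Indices of the k units with the highest PEV at time bin t_bin.
    pev: (n_units, n_bins) matrix of per-unit selectivity over time."""
    # Sort descending by selectivity in the chosen bin, keep the top k
    return np.argsort(pev[:, t_bin])[::-1][:k]
```

These indices would then restrict the population passed to the decoder at every other time bin, yielding the generalization profile shown in Fig. 5A.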
Fig. 5.
Dynamic encoding of color in vlPFC. (A) Decoding accuracies for color when using the 60 most selective vlPFC units in each time window (specified on the y axis) and then training and testing the decoder with these units at other time windows (specified by the x axis). Time is relative to cue onset (or array onset in the rightmost/uppermost part). Vertical and horizontal dashed white lines denote delay onset. Black dashed contours enclose time periods with accuracies not significantly different from those on the diagonal (permutation test, P < 0.01). (B) PEV across time. Groups of neurons identified based on sustained selectivity are indicated by the color bars along the y axis. Each row illustrates the z-scored PEV of one unit sorted within each group by the time they first showed significant PEV (same conventions as in SI Appendix, Fig. S1). (C) Mean decoding accuracy over time for groups identified in (B). Colors match those displayed along the y axis in (B). Horizontal color lines at the top show periods with significant decoding accuracy compared to chance for each line (permutation test, P < 0.001). Color line at the bottom indicates time periods with significant differences between the two groups (cluster-based permutation test, P < 0.001). All other conventions as in Fig. 2. (D) Instantaneous velocity through the multidimensional state space for the different temporal selectivity clusters, color coded as in (B). Periods with significant differences compared to the baseline (upper 95th percentile) for each group are indicated by the horizontal lines at the top of the graph (permutation test, P < 0.001). Color line at the bottom indicates time periods with significant differences between the two groups (cluster-based permutation test, P < 0.001). All other conventions are as in Fig. 4C. (E) Cross-temporal classification analysis for the group 1 subpopulation. Same conventions as in Fig. 3.
Our results so far suggest that the dynamic nature of the code for color is due, at least in part, to units with dynamic selectivity across time. We therefore asked whether a group of neurons with relatively sustained selectivity would allow for a stable decoding of color information across task epochs. To this end, we identified units with significant selectivity, as measured by PEV, for a minimum of 34 bins in total through the cue and memory periods (430 ms; see Materials and Methods and Fig. 5B). Subsequently, we decoded color information from the two groups: the subpopulation with sustained selectivity and the rest of the population (Fig. 5C). The population of units with sustained selectivity through the cue and memory periods gave the highest decoding accuracies throughout the trial and therefore largely drove decoding performance at the population level (Fig. 5C, group 1). Moreover, this subpopulation was mainly responsible for the rapid increase in velocity at the onset of the memory period (Fig. 5D). Notably, although these units are highly selective during both the cue and delay periods (Fig. 5B), they do not generalize well outside the delay period (Fig. 5E). This suggests a reconfiguration of the population code during the delay period, as demonstrated by the rapid increase in velocity. A similar analysis was carried out for the spatial cue across vlPFC units and identified two groups with a stable coding pattern in the transition from the cue to the memory period (SI Appendix, Fig. S8). A comparison across the highly color-selective group (Fig. 5B, n = 63) and the highly location-selective group (SI Appendix, Fig. S8B, n = 130) showed that 33 units participated in both (χ2 test, P < 0.001).
Accumulating evidence suggests that, despite the dynamic representation of working memory contents in high-dimensional spaces, a stable readout of information across time can be obtained from low-dimensional subspaces (21, 28, 29). To explore whether a stable subspace for color/spatial information exists across the sensory presentation and the working memory periods, we calculated a memory subspace defined by the first two principal components (PCs) computed from the average activity during the last 500 ms of the delay, across the three color/spatial cues. Subsequently, we examined whether color/spatial information could be decoded from activity from other time windows projected onto this subspace. Activity projected on the mnemonic subspace resulted in above-chance decoding performance during the cue epoch (P < 0.001, Fig. 6A for the color and SI Appendix, Fig. S9A for the spatial cue). However, projections on dynamic subspaces that were defined in each time window (100-ms windows, advanced in 10-ms steps) yielded significantly higher decoding accuracy in the cue epoch (P < 0.001, cluster-based permutation test). These results are in line with a previous report (21) and suggest that although sensory and working memory representations for the attentional instruction differ in PFC, the mnemonic subspace carries significant cue information during the cue period.
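The mnemonic-subspace projection can be sketched in a few lines. This is a hypothetical illustration on synthetic data: the memory subspace is spanned by the first two PCs of the condition-averaged late-delay activity, and cue-epoch activity is projected onto it before decoding (chance here is 1/3). The partial overlap between cue and delay codes is an assumption built into the simulated means.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(1)
n_units, n_trials = 100, 120
labels = np.repeat([0, 1, 2], n_trials // 3)            # three cues

# Synthetic condition means; the cue-epoch code partly overlaps the delay code
delay_means = rng.normal(0, 1, (3, n_units))
cue_means = delay_means + rng.normal(0, 0.3, (3, n_units))
delay_act = delay_means[labels] + rng.normal(0, 0.4, (n_trials, n_units))
cue_act = cue_means[labels] + rng.normal(0, 0.4, (n_trials, n_units))

# Memory subspace: first two PCs of the three late-delay condition means
pca = PCA(n_components=2).fit(delay_means)

# Decode cue identity from activity projected onto the mnemonic subspace
clf = NearestCentroid().fit(pca.transform(delay_act), labels)
cue_acc = clf.score(pca.transform(cue_act), labels)     # chance = 1/3
```

Because the simulated cue code overlaps the delay code, cue-epoch activity projected onto the mnemonic subspace decodes well above chance, paralleling the result in Fig. 6A.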
Fig. 6.
Activity projections onto low-dimensional coding subspaces for the color cue and the vlPFC population. (A) Decoding of cue identity after projecting single trial activity to either a memory (black line) or dynamic (red line) coding subspace. Horizontal color lines at the top show periods with significant decoding accuracy compared to chance for each line (permutation test, P < 0.001). The horizontal line at the bottom shows time periods with significant differences between memory vs. dynamic subspace projections (cluster-based permutation test, P < 0.001). (B) Activity projected on the shared cue/memory subspace. Each point corresponds to a single trial. Points are color coded according to the actual color of the cue (red, green, blue). Crosses indicate time-averaged activity in the cue epoch and circles in the memory epoch. (C) Cross-temporal decoding of projected activity onto the shared subspace. Vertical and horizontal dashed white lines denote delay onset. Black dashed contours enclose time periods with accuracies significantly different from those on the diagonal (permutation test, P < 0.001).
Going one step further, we wondered whether we could identify a shared subspace that maximized variance during both the cue and delay epochs. Should such a subspace exist, it could allow downstream neurons to read out time-invariant information in a stable manner despite the dynamic code. We applied manifold optimization and identified a shared subspace that maximized the sum of projected variance during the last 500 ms of both the cue and memory epochs [(30), SI Appendix, Supporting Methods]. This shared subspace captured 95% of stimulus variance during memory and 89% during the cue period for the color cue (79% and 82%, respectively, for the spatial cue). Subsequently, we examined whether this projected variance was task informative by projecting activity onto the shared subspace. The results show that projected stimulus activity was discriminable during both the cue and delay epochs (Fig. 6B and SI Appendix, Fig. S9B for the color and spatial cue, respectively). In addition, cross-temporal decoding showed that classifiers trained in this subspace could discriminate cue identity with high accuracy during both the cue and delay epochs (Fig. 6C and SI Appendix, Fig. S9C for the color and spatial cue, respectively). Moreover, the highly selective subpopulations for the color (Fig. 5B) and spatial (SI Appendix, Fig. S8B) instruction contributed the largest weights in the shared subspace (permutation test, P < 0.01). Thus, despite the dynamic transition of the population code from the cue to the memory epoch, it is possible to identify a shared, low-dimensional, task-relevant subspace achieving high decoding accuracy throughout the trial.
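A simple proxy for the shared-subspace idea can be sketched as follows. The paper uses manifold optimization (30); as an illustrative stand-in, we take the top singular vectors of the stacked, centered cue and delay condition means, which yields a single subspace capturing variance from both epochs. All data and the variance-fraction helper are synthetic/illustrative, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(2)
n_units = 100
cue_means = rng.normal(0, 1, (3, n_units))              # condition x unit
delay_means = cue_means + rng.normal(0, 0.3, (3, n_units))  # related codes

# Stack the centered condition means from both epochs and find a
# 2D subspace via SVD of the stacked matrix
stacked = np.vstack([cue_means - cue_means.mean(0),
                     delay_means - delay_means.mean(0)])
_, s, vt = np.linalg.svd(stacked, full_matrices=False)
shared = vt[:2]                                         # 2 x n_units basis

def frac_var(means, basis):
    """Fraction of (centered) condition-mean variance captured by basis."""
    c = means - means.mean(0)
    return np.sum((c @ basis.T) ** 2) / np.sum(c ** 2)

v_cue = frac_var(cue_means, shared)
v_delay = frac_var(delay_means, shared)
```

When the cue and delay codes overlap, as simulated here, a single 2D subspace captures most of the stimulus variance in both epochs, analogous to the 95%/89% figures reported for the color cue.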
Overall, based on the analysis of spiking activity, two important findings highlight how behaviorally relevant spatial and color attentional instructions are encoded in vlPFC and FEF. First, whereas the vlPFC population adaptively processes behaviorally relevant spatial and color information during encoding and maintenance, the FEF population carries mainly spatial information. Second, although decoding from time-optimized subspaces suggests stronger dynamics for the color code within vlPFC and more stationary encoding for the spatial code, vlPFC does maintain time-invariant color instructions in a low-dimensional subspace of neural activity.

Attention Influences Encoding of Feature Information in PFC.

Next, we explored whether information about stimulus features is carried by the vlPFC population regardless of the behavioral relevance of the stimuli. Previous studies have shown that during passive viewing, stimulus parameters such as location and color are encoded by single neurons in PFC (8). However, most studies have suggested that object or feature selectivity in PFC emerges as a result of task relevance (31–33). Here, we asked two questions. First, does the PFC population encode features of unattended/ignored stimuli? Second, are all features of attended stimuli encoded by PFC or only those that are task relevant?
To answer these questions, we exploited the postarray-cueing trials in which the array of stimuli appeared before any instruction on where to attend, followed by the spatial cue that indicated the location of the target (Fig. 1A, third row). We first asked whether information about color could be reliably decoded from the activity pattern of the vlPFC population during the array presentation period (i.e., in the absence of spatial attention) and following the spatial cue (i.e., with attention directed to the target) for stimuli that were attended or ignored. Because RFs in PFC were largely confined to the left hemifield and those in V4 were confined to the lower left position, we restricted analysis to the stimulus in the lower left position. We used an SVM algorithm to classify the firing rate response patterns in a time-resolved manner. Responses to two colors were included in the analysis, as only two colors (randomly chosen each day) could appear in the lower left position in a given session. Thus, chance performance was at 50%. We selected sessions with the same two colors in that position that would give us the largest number of signals for the decoding analysis (SI Appendix, Supporting Methods). In vlPFC, decoding performance was above chance during the array presentation period (between 60 and 80%, Fig. 7A). More importantly, during the attention period, decoding performance exceeded chance only for the attended stimulus (permutation test, P < 0.05, Fig. 7A). Attention increased color information around 190 ms after cue onset. Similar results were obtained when we assessed whether the orientation of the grating could be decoded from vlPFC responses (Fig. 7B). Note that the relevant feature for the behavioral response was the orientation and not the color of the grating and that in these trials the instruction was spatial.
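The time-resolved, two-class decoding can be sketched as below. This is synthetic and illustrative only: the attention-dependent emergence of color information is simulated by switching on a signal component mid-trial, and cross-validated accuracy is computed in each window (chance = 50%, as in the paper).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_units, n_trials, n_windows = 40, 80, 10
labels = np.repeat([0, 1], n_trials // 2)               # two possible colors
color_axis = rng.normal(0, 1, n_units)                  # population color axis

acc = np.zeros(n_windows)
for t in range(n_windows):
    gain = 1.0 if t >= n_windows // 2 else 0.0          # info appears post-cue
    X = gain * np.outer(2 * labels - 1, color_axis)     # signed color signal
    X += rng.normal(0, 1.0, (n_trials, n_units))        # trial-to-trial noise
    acc[t] = cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean()
```

In the simulation, accuracy hovers at chance before the signal is switched on and rises well above it afterward, the same qualitative pattern as the attended-stimulus trace in Fig. 7A.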
Thus, the color of the stimulus was neither used as an instruction nor as a response-related variable and was only one feature of the attended stimulus. Our results demonstrate that the vlPFC population encodes simple features such as color and orientation even when they are not behaviorally relevant, but that attention has a profound influence on the code; once attention is deployed, only features of the attended stimulus are encoded by the vlPFC population. Features of ignored stimuli could not be decoded from population activity in vlPFC. By contrast, results from V4 indicated that the V4 population encoded color and orientation for stimuli in the RF irrespective of the locus of attention (SI Appendix, Fig. S10).
Fig. 7.
Influence of attention on decoding stimulus features from the vlPFC population during the array presentation period. (A) Mean decoding accuracy over time for stimulus color when attention is directed to that stimulus (red line) and when it is not (black line). Responses were classified into two classes with equal prior probabilities; thus, chance performance was at 50% and is shown by the horizontal dashed line. (B) Same as (A) but for orientation. (C) Same as (A) and (B) but for stimulus identity, including the combination of color and orientation. In all graphs, the two vertical dashed lines indicate array onset and cue onset, respectively. The two horizontal color lines at the top of each graph show periods with significant decoding accuracy compared to chance for each line (permutation test, P < 0.05). The horizontal line at the bottom of each graph shows time periods with significant differences between the two lines (cluster-based permutation test, P < 0.001).
These results highlight the role of vlPFC in encoding information that is relevant or potentially relevant for subsequent behavior. They also demonstrate that visual features are encoded in vlPFC as long as they are part of the attended stimulus regardless of the attended feature, in line with previous studies which have shown that attention acts on objects (34–36). Indeed, decoding accuracies for the entire stimulus (combination of color and orientation) were above chance for the attended stimulus and not significantly different from chance for the ignored stimulus (Fig. 7C).

Beta and High-Gamma Bands Carry Information About Cue Identity in PFC.

A growing body of evidence suggests that attention and maintenance of information in memory also modulate the amplitude of local field potentials (LFPs) in specific frequency bands, in both visual and executive areas (37–43). To test whether oscillatory activity in specific frequency bands carries information about the instruction cue, we decoded the information conveyed in LFP activity about the three spatial and the three color cues, during the cue period and the working memory period. A total of 337 and 100 LFP signals from vlPFC and FEF, respectively, were included in the decoding analysis for the spatial cue and 343 and 99, respectively, for the color cue. We found that spatial information could be robustly decoded from the high-gamma band (61–140 Hz) in both vlPFC and FEF, and from the beta/low-gamma band (20–40 Hz) in vlPFC (SI Appendix, Figs. S11 A and C and S12 A and C and Supporting Results). Significant spatial information was found in these frequency bands both during the cue and the delay period (cluster-based permutation test, P < 0.001). Information about the color cue could be robustly decoded from vlPFC activity in the high-gamma range, during both the cue and delay periods, and from the beta/low-gamma band during the cue presentation period (SI Appendix, Figs. S11B and S12B). FEF LFP activity did not carry significant information about color in any time period (SI Appendix, Figs. S11D and S12D). Compared to PFC, the pattern in V4 was strikingly different, with significant information about the cue carried mainly in the theta, alpha, and high-gamma frequency bands (SI Appendix, Supporting Results and Fig. S14).
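The band-limited LFP decoding can be sketched as follows. Band edges follow the paper (beta/low-gamma 20–40 Hz, high-gamma 61–140 Hz), but the signals, the amplitude-modulation model, and all parameter choices below are synthetic and illustrative: each channel's high-gamma amplitude is modulated by cue identity, and cue identity is then decoded from log band power across channels.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

fs = 1000.0                                             # sampling rate (Hz)
rng = np.random.default_rng(4)
n_ch, n_trials, n_samp = 16, 90, 500
labels = np.repeat([0, 1, 2], n_trials // 3)            # three cues

def band_power(x, lo, hi):
    """Log power of x in the [lo, hi] Hz band (zero-phase Butterworth)."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.log(np.mean(filtfilt(b, a, x, axis=-1) ** 2, axis=-1))

# Synthetic LFPs: cue identity modulates a 90-Hz component per channel
t = np.arange(n_samp) / fs
gamma_gain = 1 + 0.5 * rng.normal(0, 1, (3, n_ch))      # condition x channel
lfp = rng.normal(0, 1, (n_trials, n_ch, n_samp))        # broadband noise
carrier = np.sin(2 * np.pi * 90 * t)
lfp += gamma_gain[labels][:, :, None] * carrier

X = band_power(lfp, 61, 140)                            # trials x channels
acc = cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean()
```

Because the simulated modulation lives in the 61–140 Hz band, decoding from high-gamma power is well above the 1/3 chance level; repeating `band_power` with other edges (e.g., 20–40 Hz) would probe other bands in the same way.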

Discussion

The present study provides strong evidence that instructing attention by spatial or color cues affects the temporal dynamics of the neural code in vlPFC in different ways. Moreover, we show that information about the location and the color of a future target is encoded in an anatomically specific manner. Neuronal ensembles in vlPFC encode both spatial and color information during presentation of the instruction cue as well as during maintenance of this information in working memory. By contrast, the FEF neuronal population robustly encodes and stores mainly information about the location of the future target. Color information could not be decoded robustly from the FEF population activity, although it was present at the level of individual units. This difference was evident in both spiking and LFP activity in the two areas. Our results suggest that coding of attentional instructions in PFC is shaped by functional selectivity and can be implemented by different mechanisms for different features even within the same area.
Our recordings targeted the FEF and vlPFC, the latter largely corresponding to area 45A and potentially including parts of ventral 46 and area 12. Both FEF and vlPFC receive projections from dorsal as well as ventral stream visual areas and are thus well suited to encode spatial as well as nonspatial information that is behaviorally relevant in the context of an attention task (44, 45). Electrophysiological evidence has confirmed that PFC neurons show selectivity for stimulus features under passive viewing and no previous exposure to those stimuli (8). However, the majority of studies suggest that PFC neurons selectively encode information that is behaviorally relevant, highlighting the fact that task context can tune responses in PFC (5). Indeed, feature/object and spatial selectivity have been found in vlPFC and selectivity for the attentional template is evident in visual and mnemonic activity in this area (9, 24, 31, 33, 46–49). Even in the FEF, where neurons do not typically exhibit selectivity for specific features such as color and shape, shape and color selectivity have been reported in the context of a visual search task (32, 50).
Such an adaptive coding scheme raises two questions. First, do neuronal ensembles across different PFC areas encode behaviorally relevant features in a similar manner? Our results draw a distinction between FEF and vlPFC by showing that the vlPFC population encodes both spatial and color instructions, whereas the FEF population robustly encodes only spatial instructions even when both stimulus features are task relevant. This held true during both the cue and delay epochs. Thus, despite the ability of the PFC population to adaptively tune responses according to task demands, a relative selectivity for spatial and nonspatial attributes remains within PFC, as previously suggested (51). Although our finding highlights the fact that the FEF population does not carry significant information about the color instruction, we found that a small but significant proportion of FEF units did show selectivity for color during the cue and memory periods. Previous studies have provided conflicting results with regard to stimulus selectivity in the FEF. On one hand, some studies have found little or no stimulus selectivity in FEF (24, 52, 53), with selectivity emerging as a result of behavioral relevance and saliency/feature attention (24, 26, 32, 54). On the other hand, other authors have found that FEF cells exhibit selectivity for stimulus features such as shape (55), color (56, 57) and motion (58). Although we found that a small percentage of color selective neurons in FEF with relatively sustained selectivity carried, as a group, some color information in the delay period, the overall low, transient and variable selectivity across all FEF units did not allow for robust decoding of color information at the population level. As a result, decoding accuracies from the FEF population exceeded chance only transiently.
This is in contrast to the results of Panichello and Buschman (57) who found highly significant color information in the FEF population throughout the trial in a retrospective spatially cued task. Differences in task design may explain the apparent discrepancy between the two studies [e.g., the fact that animals were cued spatially and had to respond about the color of the target in (57), whereas an abstract, centrally presented color cue needed to be encoded in our task to select the stimulus whose orientation should be reported]. Considering the existing evidence, the emerging picture suggests that information about color does reach the FEF, however, color processing within the FEF is task dependent; encoding of color instructions for attentional guidance seems to be robust within vlPFC but not in FEF [see also (24)].
The second question raised in the context of an adaptive coding scheme is how cognitive requirements shape encoding of behaviorally relevant features. The majority of studies that have addressed temporal dynamics in PFC have shown that PFC neuronal ensembles change the way they encode information over the course of a trial (12, 13, 17). We show that the population activity state for a color instruction that indicates the identity of a future target is characterized by stronger dynamics compared to that for a spatial instruction. At the population level, spatial information is encoded in a rather stationary manner during presentation and memorization, both in FEF and vlPFC. Neurons that are selective for location carry this information throughout the trial under different cognitive demands. This is in accordance with previous studies (11, 51, 59, 60). Notably, activity for location in our task is unrelated to the response and thus represents a code for the spatial instruction rather than preparation for a motor response. By contrast, a change in the population activity state for color encoding between the cue and delay epochs suggests stronger dynamics for the color code. It should be noted that the percentage of color selective units was similar across the late cue and delay periods (SI Appendix, Fig. S2); thus, the change in the code is not likely to result from a difference in the number of units selective for color in the two periods. Interestingly, decoding accuracies for color were not appreciably different between the two periods (see Figs. 2A and 3B). A likely conclusion would therefore be a reconfiguration of the population code during the delay period that is specific for color and not for location. Consistent with this interpretation, it was recently found that feature information during visual stimulation and maintenance in memory is not necessarily carried by the same neurons in lateral PFC and area MST (48).
Moreover, we found that a neuronal subpopulation with relatively sustained selectivity that essentially drove decoding for the entire population, underwent a rapid change in the network state during the transition to the delay period. Thus, other factors beyond selectivity also shape the dynamic code for color.
Why is this dynamic encoding more prominent for color than for location? One possibility is that this finding reflects differences in the way color and location information is represented in memory within PFC. However, our study cannot rule out alternative interpretations. There is considerable evidence that working memory representations are task-dependent and can dynamically change to match the current behavioral goal (57, 61). It is thus possible that the difference in spatial and color encoding during the memory period reflects differences in the behavioral strategy of the animals. It is conceivable that in the spatial cueing conditions, rather than memorizing the identity of the cue, subjects could perform the task by shifting attention toward the indicated location and that during the delay period, attention was maintained at that location. Such a strategy could explain the stationarity of the code across the two epochs. By contrast, in the color cueing condition, a shift of attention was not possible since the color cue did not predict the location of the future target. Accordingly, a transformation was likely required from encoding to memorization, which could result in a change in the code. Alternatively, subjects could employ feature attention in the color cueing condition during the delay period. In that case, it is possible that the differences we find in the dynamics of color and location coding reflect differences in the functional networks and mechanisms that implement spatial and feature attention. Although it is not possible to distinguish between these possibilities, our study shows how instructing attention by spatial or color cues affects the dynamics of the neural code within PFC, both at the level of different regions as well as within the same region.
Whether the difference in the neural code in PFC between spatial and color information holds also for other nonspatial features besides color and whether it is task dependent remains to be examined in future studies.
The dynamic nature of the neural code raises the question of how downstream neurons read out information from a population code with complex time-varying dynamics. It has been suggested that a high-dimensional state space contains low-dimensional subspaces in which stimulus representations may remain stable across time. Indeed, recent studies have provided evidence that complex working memory dynamics coexist with a stable memory subspace (21, 28). However, does this stable population coding within the memory subspace extend also into the stimulus presentation period? One study reported that it did, although projections to a dynamic subspace provided substantially better readout performance during cue and early delay epochs (21). Another study reported that a subspace defined from late delay activity did not extend into the target presentation period or the early delay period (28). Our results are consistent with both studies. We found that the memory subspace provided above-chance decoding performance during the cue epoch; yet, dynamic subspace projections that were redefined in each time window outperformed mnemonic projections in the cue and early delay periods. However, we did identify a shared subspace that maximized projected variance during both the cue and memory epochs and found that it was task relevant, achieving high decoding accuracy throughout the trial. Thus, a stable readout of spatial and color information about the future target is possible from PFC population activity in a low-dimensional subspace. Although our results add to the literature that supports a time-invariant stable readout of information despite time-varying dynamics, testing whether the brain implements such a mechanism is a challenging and nontrivial task for future studies.
Our results are consistent with previous studies, which have shown stimulus-selective activity in PFC during working memory tasks (9, 24, 33, 46–49). On the other hand, they contradict results from human studies that failed to decode stimulus-related features during working memory using multivoxel pattern classification analysis of BOLD signals in lateral PFC (62). This could be due to the low resolution of the BOLD signal and the largely absent topographical organization of feature processing in PFC at this scale (63, 64).
The strong involvement of vlPFC in encoding behaviorally relevant stimuli was further confirmed by our finding that only features of the attended and not of the ignored stimuli could be robustly decoded from vlPFC neuronal ensembles. This is in agreement with previous results, which have highlighted that many PFC cells encode attributes of task relevant objects and not features of ignored stimuli (31, 33) and that attention increases information about the attended stimuli (57). We also show that even irrelevant features of the attended object (in our case color) are encoded by vlPFC ensembles, in line with previous studies, which have shown that attention acts on objects (34–36). Thus, vlPFC does not merely encode the behavioral relevance of visual stimuli but rather the identity of stimuli that are behaviorally relevant. By contrast, V4 neurons encode the features and identity of stimuli in their RF regardless of their behavioral relevance. Information about the color, orientation, and stimulus identity could be decoded with accuracies reaching 100% both for attended and unattended stimuli. This does not contradict findings from previous studies, which have shown that attention affects stimulus representations in V4 [e.g., (40, 57, 65, 66)]. The very high accuracies in both conditions could be due to a ceiling effect that prevents the decoder from capturing the effect of attention on stimulus representations. Our results rather indicate that despite the change in the representation of stimuli with attention, unattended stimulus features can still be decoded robustly from visual area V4 but not from PFC.
Our results further suggest how information about the future target is encoded in visual cortex. Although the V4 neuronal ensemble we recorded from had nonfoveal RFs and therefore was not directly stimulated during the cue period, we found that information about the color of the future target could be decoded from V4 population firing rates during the delay period (i.e., in the absence of visual stimulation). Multivariate pattern analysis of human functional imaging data has also shown that information held in memory can be robustly decoded from voxel-based activity patterns in human visual areas (67–69). Electrophysiological studies in monkeys have provided variable results for and against the idea that neurons in midlevel visual areas hold a representation of stimulus-related activity in working memory (48, 49, 57, 70, 71). The fact that color information can be successfully decoded during the delay from spiking activity in V4, albeit with significantly lower accuracies compared to PFC, suggests that task relevant feature information can reach V4 color selective cells throughout the visual field. One possibility is that PFC, where the instruction is encoded, primes activity in color selective neurons in V4 throughout the visual field. This could be achieved indirectly through connections with IT and TEO or the posterior parietal cortex (45, 72, 73). It is less clear whether direct connections from vlPFC to V4 exist, which could also influence activity of color selective cells in V4. Inactivation of the ventral prearcuate area (VPA), which largely overlaps our vlPFC recording region, eliminates feature attention effects in V4 in a visual search task (25). Thus, vlPFC is a source of feature specific signals to V4. It is also possible that color selective V4 neurons with central RFs that encode the instruction cue transfer this information to color selective neurons with peripheral RFs within V4. Interestingly, the same was not true for spatial information.
The ensemble of spatially selective V4 units did not carry significant information about the location of the future target in the absence of visual stimulation. This result is hardly surprising given that we sampled from V4 neurons with nonfoveal RFs, which were not stimulated by the central cue; however, it contrasts with results from the color cueing condition and further suggests that had the animals shifted attention to the peripheral location, activity modulation was not sufficient to decode the location of attention during the delay period.
Modulation of LFP activity in different frequency bands has been reported across many brain areas with behavior (37–43). LFPs reflect voltage fluctuations that arise mainly from the average of postsynaptic potentials in a small cortical region around the recording electrode (74) and can therefore carry information complementary to that of spiking activity. Here, we found that spatial and color information could be robustly decoded from vlPFC LFP activity in the high-gamma and beta bands. This was largely true for both cue and delay periods. This is in line with previous studies. Anti-correlated bursts of high-gamma and beta activity have been linked to increased working memory information in spiking activity, suggesting a potential mechanism for working memory control (38, 75). Moreover, a close relationship between spiking activity and high-gamma activity has been suggested in previous studies (76, 77) and a strong correlation of decoding accuracies obtained from spiking activity and LFP activity above 60 Hz has been reported (37). Our results are consistent with this by showing that significant information about location and color is carried both in spiking and high-gamma LFP activity in vlPFC. Accordingly, similar to decoding from spiking activity, decoding using LFP activity revealed that FEF LFPs carried significant information only about location. Our finding that beta band activity carries information about the spatial and color instruction is consistent with studies that have shown selectivity in beta LFP activity for rules and categories in PFC and for maintenance of spatial information in FEF beta frequencies during the memory period of a memory guided saccade task (78–80). Interestingly, beta power was not selective for color during the memory period, highlighting a role for beta in maintenance of spatial information in memory at least within vlPFC.
This may seem at odds with previous reports on phase dependent coding of multiple objects in the high-beta band during working memory (41). It is, however, possible that memorization of a single color does not recruit the same mechanisms as memorization of multiple objects.
In V4, lower frequencies, in the theta and alpha bands, carried significant information about color and location during the cue period and these, together with high-gamma activity, were selective for color and location during the memory period. Selectivity for nonspatial visual features in theta, alpha and high-gamma bands in the memory period of a delayed match to sample task has been shown previously in MT (48). Enhanced activity in the theta band has also been found in V4 in the delay of a match to sample task (42). Given that during the delay, synaptic activity reflects feedback inputs and local processing, selectivity in theta oscillatory activity is likely to reflect feedback from IT or vlPFC. This is in agreement with previous results showing phase locking of spikes to theta oscillations across V4 and PFC during working memory (42). Future studies employing causal manipulations of activity could directly assess the source of these frequency-specific modulations. Selectivity in high-gamma activity during the delay, on the other hand, was lower and likely reflects sparse V4 spiking activity carrying task relevant information.
Overall, our results demonstrate that spatial and color attentional instructions that are held in memory can be decoded robustly from both spiking and LFP vlPFC activity. Moreover, they reveal that dynamic and stable coding of attentional instructions can coexist in activity patterns of PFC ensembles. These results may have strong implications for how networks are shaped according to behavioral demands [see also (81)]. Future studies should examine how the differences in encoding task relevant information described here may shape functional connectivity between visual and prefrontal areas based on behavioral goals.

Materials and Methods

All animal experiments were approved by the relevant authorities. Animals were trained in cued covert attention tasks with spatial or color cueing. Simultaneous extracellular recordings from multiple electrodes were carried out from vlPFC, FEF, and visual area V4. Firing rate and LFP analyses were performed with custom code in MATLAB. Detailed methods on subjects, behavioral tasks, electrophysiological recordings, and analyses are provided in SI Appendix, Supporting Methods.

Data, Materials, and Software Availability

Matlab data files and code that reproduce the main figures in the paper have been deposited in a publicly available GitHub repository (https://github.com/gregorioulab/PNAS2022) (82).

Acknowledgments

We thank Alexandra Antoniadou for help with the analysis on the influence of attention on decoding accuracies. This research was supported by a grant co-financed by Greece and the European Union (European Regional Development Fund) through the Operational Programme “Competitiveness Entrepreneurship Innovation 2014–2020” in the context of project MIS 5070462 (to G.G.G.), by a grant co-financed by Greece and the European Union (European Social Fund-ESF) through the Operational Programme “Human Resources Development, Education and Lifelong Learning 2014–2020” in the context of project MIS 5048179 (to G.G.G.), and by a grant to P.S. from the Hellenic Foundation for Research and Innovation (HFRI) and the General Secretariat for Research and Technology (GSRT), under the “1st Call for H.F.R.I. Research Projects to support Post-Doctoral Researchers” (Project No 1199).

Supporting Information

Appendix 01 (PDF)
Movie S1
Illustration of neuronal population dynamics. Three-dimensional PCA projections of the original N-dimensional vlPFC population state trajectories. Each of the three trajectories corresponds to a different spatial (upper left, lower left, and lower right) or color (red, green, blue) cue. Segments of each trajectory are colored by task epoch: fixation (gray), cue (red), early delay (green), late delay (blue). A rapid increase in velocity following cue onset is evident for both spatial and color cues. At the onset of the delay period, a larger increase in velocity was observed for the color than for the spatial cue. Distance and velocity changes in activation state were quantified in Fig. 4. Note that the quantities in Fig. 4 were calculated in the original N-dimensional space; the PCA projections shown here are for illustration purposes only.

References

1
A. Baddeley, Working memory: Looking back and looking forward. Nat. Rev. Neurosci. 4, 829–839 (2003).
2
C. Constantinidis, E. Procyk, The primate working memory networks. Cogn. Affect. Behav. Neurosci. 4, 444–465 (2004).
3
E. K. Miller, J. D. Cohen, An integrative theory of prefrontal cortex function. Annu. Rev. Neurosci. 24, 167–202 (2001).
4
S. Paneri, G. G. Gregoriou, Top-down control of visual attention by the prefrontal cortex. Functional specialization and long-range interactions. Front. Neurosci. 11, 545 (2017).
5
J. Duncan, An adaptive coding model of neural function in prefrontal cortex. Nat. Rev. Neurosci. 2, 820–829 (2001).
6
S. M. Courtney, L. G. Ungerleider, K. Keil, J. V. Haxby, Object and spatial visual working memory activate separate neural systems in human cortex. Cereb. Cortex 6, 39–49 (1996).
7
F. A. Wilson, S. P. Scalaidhe, P. S. Goldman-Rakic, Dissociation of object and spatial processing domains in primate prefrontal cortex. Science 260, 1955–1958 (1993).
8
M. R. Riley, X. L. Qi, C. Constantinidis, Functional specialization of areas along the anterior-posterior axis of the primate prefrontal cortex. Cereb. Cortex 27, 3683–3697 (2017).
9
S. C. Rao, G. Rainer, E. K. Miller, Integration of what and where in the primate prefrontal cortex. Science 276, 821–824 (1997).
10
M. Rigotti et al., The importance of mixed selectivity in complex cognitive tasks. Nature 497, 585–590 (2013).
11
A. H. Lara, J. D. Wallis, Executive control processes underlying multi-item working memory. Nat. Neurosci. 17, 876–883 (2014).
12
E. M. Meyers, D. J. Freedman, G. Kreiman, E. K. Miller, T. Poggio, Dynamic population coding of category information in inferior temporal and prefrontal cortex. J. Neurophysiol. 100, 1407–1419 (2008).
13
E. M. Meyers, X. L. Qi, C. Constantinidis, Incorporation of new information into prefrontal cortical activity after learning working memory tasks. Proc. Natl. Acad. Sci. U.S.A. 109, 4651–4656 (2012).
14
S. Tremblay, F. Pieper, A. Sachs, J. Martinez-Trujillo, Attentional filtering of visual information by neuronal ensembles in the primate lateral prefrontal cortex. Neuron 85, 202–215 (2015).
15
A. B. Graf, A. Kohn, M. Jazayeri, J. A. Movshon, Decoding the activity of neuronal populations in macaque primary visual cortex. Nat. Neurosci. 14, 239–245 (2011).
16
D. A. Markowitz, Y. T. Wong, C. M. Gray, B. Pesaran, Optimizing the decoding of movement goals from local field potentials in macaque cortex. J. Neurosci. 31, 18412–18422 (2011).
17
M. G. Stokes et al., Dynamic coding for cognitive control in prefrontal cortex. Neuron 78, 364–375 (2013).
18
A. C. Snyder, B. M. Yu, M. A. Smith, A stable population code for attention in prefrontal cortex leads a dynamic attention code in visual cortex. J. Neurosci. 41, 9163–9176 (2021).
19
E. Astrand, G. Ibos, J. R. Duhamel, S. Ben Hamed, Differential dynamics of spatial attention, position, and color coding within the parietofrontal network. J. Neurosci. 35, 3174–3189 (2015).
20
D. A. Crowe, B. B. Averbeck, M. V. Chafee, Rapid sequences of population activity patterns dynamically encode task-critical spatial information in parietal cortex. J. Neurosci. 30, 11640–11653 (2010).
21
J. D. Murray et al., Stable population coding for working memory coexists with heterogeneous neural dynamics in prefrontal cortex. Proc. Natl. Acad. Sci. U.S.A. 114, 394–399 (2017).
22
C. Wardak, G. Ibos, J. R. Duhamel, E. Olivier, Contribution of the monkey frontal eye field to covert visual attention. J. Neurosci. 26, 4228–4235 (2006).
23
T. Moore, M. Fallah, Control of eye movements and spatial attention. Proc. Natl. Acad. Sci. U.S.A. 98, 1273–1276 (2001).
24
N. P. Bichot, M. T. Heard, E. M. DeGennaro, R. Desimone, A source for feature-based attention in the prefrontal cortex. Neuron 88, 832–844 (2015).
25
N. P. Bichot, R. Xu, A. Ghadooshahy, M. L. Williams, R. Desimone, The role of prefrontal cortex in the control of feature attention in area V4. Nat. Commun. 10, 5727 (2019).
26
P. Sapountzis, S. Paneri, G. G. Gregoriou, Distinct roles of prefrontal and parietal areas in the encoding of attentional priority. Proc. Natl. Acad. Sci. U.S.A. 115, E8755–E8764 (2018).
27
E. Astrand, C. Wardak, S. Ben Hamed, Selective visual attention to drive cognitive brain-machine interfaces: From concepts to neurofeedback and rehabilitation applications. Front. Syst. Neurosci. 8, 144 (2014).
28
A. Parthasarathy et al., Time-invariant working memory representations in the presence of code-morphing in the lateral prefrontal cortex. Nat. Commun. 10, 4995 (2019).
29
Y. Xie et al., Geometry of sequence working memory in macaque prefrontal cortex. Science 375, 632–639 (2022).
30
X. Jiang, H. Saggar, S. I. Ryu, K. V. Shenoy, J. C. Kao, Structure in neural activity during observed and executed movements is shared at the neural population level, not in single neurons. Cell Rep. 32, 108006 (2020).
31
S. Everling, C. J. Tinsley, D. Gaffan, J. Duncan, Selective representation of task-relevant objects and locations in the monkey prefrontal cortex. Eur. J. Neurosci. 23, 2197–2214 (2006).
32
N. P. Bichot, J. D. Schall, K. G. Thompson, Visual feature selectivity in frontal eye fields induced by experience in mature macaques. Nature 381, 697–699 (1996).
33
G. Rainer, W. F. Asaad, E. K. Miller, Selective representation of relevant information by neurons in the primate prefrontal cortex. Nature 393, 577–579 (1998).
34
K. M. O’Craven, P. E. Downing, N. Kanwisher, fMRI evidence for objects as the units of attentional selection. Nature 401, 584–587 (1999).
35
E. Blaser, Z. W. Pylyshyn, A. O. Holcombe, Tracking an object through feature space. Nature 408, 196–199 (2000).
36
M. A. Schoenfeld et al., Dynamics of feature binding during object-selective attention. Proc. Natl. Acad. Sci. U.S.A. 100, 11806–11811 (2003).
37
S. Tremblay, G. Doucet, F. Pieper, A. Sachs, J. Martinez-Trujillo, Single-trial decoding of visual attention from local field potentials in the primate lateral prefrontal cortex is frequency-dependent. J. Neurosci. 35, 9038–9049 (2015).
38
M. Lundqvist et al., Gamma and beta bursts underlie working memory. Neuron 90, 152–164 (2016).
39
P. Fries, Neuronal gamma-band synchronization as a fundamental process in cortical computation. Annu. Rev. Neurosci. 32, 209–224 (2009).
40
G. G. Gregoriou, S. J. Gotts, H. Zhou, R. Desimone, High-frequency, long-range coupling between prefrontal and visual cortex during attention. Science 324, 1207–1210 (2009).
41
M. Siegel, M. R. Warden, E. K. Miller, Phase-dependent neuronal coding of objects in short-term memory. Proc. Natl. Acad. Sci. U.S.A. 106, 21341–21346 (2009).
42
S. Liebe, G. M. Hoerzer, N. K. Logothetis, G. Rainer, Theta coupling between V4 and prefrontal cortex predicts visual short-term memory performance. Nat. Neurosci. 15, 456–462, S451-452 (2012).
43
I. C. Fiebelkorn, M. A. Pinsk, S. Kastner, A dynamic interplay within the frontoparietal network underlies rhythmic spatial attention. Neuron 99, 842–853.e8 (2018).
44
J. D. Schall, A. Morel, D. J. King, J. Bullier, Topography of visual cortex connections with frontal eye field in macaque: Convergence and segregation of processing streams. J. Neurosci. 15, 4464–4487 (1995).
45
M. J. Webster, J. Bachevalier, L. G. Ungerleider, Connections of inferior temporal areas TEO and TE with parietal and frontal cortex in macaque monkeys. Cereb. Cortex 4, 470–483 (1994).
46
E. Hoshi, K. Shima, J. Tanji, Task-dependent selectivity of movement-related neuronal activity in the primate prefrontal cortex. J. Neurophysiol. 80, 3392–3397 (1998).
47
M. Sakagami et al., A code for behavioral inhibition on the basis of color, but not motion, in ventrolateral prefrontal cortex of macaque monkey. J. Neurosci. 21, 4801–4808 (2001).
48
D. Mendoza-Halliday, S. Torres, J. C. Martinez-Trujillo, Sharp emergence of feature-selective sustained activity along the dorsal visual pathway. Nat. Neurosci. 17, 1255–1262 (2014).
49
D. Zaksas, T. Pasternak, Directional signals in the prefrontal cortex and in area MT during a working memory for visual motion task. J. Neurosci. 26, 11726–11742 (2006).
50
K. A. Lowe, J. D. Schall, Sequential operations revealed by serendipitous feature selectivity in frontal eye field. bioRxiv https://doi.org/10.1101/683144, 683144 (2019).
51
X. L. Qi, C. Constantinidis, Neural changes after training to perform cognitive tasks. Behav. Brain Res. 241, 235–243 (2013).
52
C. W. Mohler, M. E. Goldberg, R. H. Wurtz, Visual receptive fields of frontal eye field neurons. Brain Res. 61, 385–389 (1973).
53
J. D. Schall, D. P. Hanes, K. G. Thompson, D. J. King, Saccade target selection in frontal eye field of macaque. I. Visual and premovement activation. J. Neurosci. 15, 6905–6918 (1995).
54
N. P. Bichot, J. D. Schall, Effects of similarity and history on neural mechanisms of visual selection. Nat. Neurosci. 2, 549–554 (1999).
55
X. Peng, M. E. Sereno, A. K. Silva, S. R. Lehky, A. B. Sereno, Shape selectivity in primate frontal eye field. J. Neurophysiol. 100, 796–814 (2008).
56
M. Siegel, T. J. Buschman, E. K. Miller, Cortical information flow during flexible sensorimotor decisions. Science 348, 1352–1355 (2015).
57
M. F. Panichello, T. J. Buschman, Shared mechanisms underlie the control of working memory and attention. Nature 592, 601–605 (2021).
58
Q. Xiao, A. Barborica, V. P. Ferrera, Radial motion bias in macaque frontal eye field. Vis. Neurosci. 23, 49–60 (2006).
59
S. Funahashi, C. J. Bruce, P. S. Goldman-Rakic, Mnemonic coding of visual space in the monkey’s dorsolateral prefrontal cortex. J. Neurophysiol. 61, 331–349 (1989).
60
K. Kubota, H. Niki, Prefrontal cortical unit activity and delayed alternation performance in monkeys. J. Neurophysiol. 34, 337–347 (1971).
61
C. Tang, R. Herikstad, A. Parthasarathy, C. Libedinsky, S. C. Yen, Minimally dependent activity subspaces for working memory and motor preparation in the lateral prefrontal cortex. eLife 9, e58154 (2020).
62
A. C. Riggall, B. R. Postle, The relationship between working memory storage and elevated activity as measured with functional magnetic resonance imaging. J. Neurosci. 32, 12990–12998 (2012).
63
M. R. Riley, C. Constantinidis, Role of prefrontal persistent activity in working memory. Front. Syst. Neurosci. 9, 181 (2016).
64
P. Sapountzis, D. Schluppeck, R. Bowtell, J. W. Peirce, A comparison of fMRI adaptation and multivariate pattern classification analysis in visual cortex. Neuroimage 49, 1632–1640 (2010).
65
J. H. Reynolds, L. Chelazzi, R. Desimone, Competitive mechanisms subserve attention in macaque areas V2 and V4. J. Neurosci. 19, 1736–1753 (1999).
66
J. H. Reynolds, D. J. Heeger, The normalization model of attention. Neuron 61, 168–185 (2009).
67
K. K. Sreenivasan, J. Vytlacil, M. D’Esposito, Distributed and dynamic storage of working memory stimulus information in extrastriate cortex. J. Cogn. Neurosci. 26, 1141–1153 (2014).
68
E. F. Ester, D. E. Anderson, J. T. Serences, E. Awh, A neural measure of precision in visual working memory. J. Cogn. Neurosci. 25, 754–761 (2013).
69
S. A. Harrison, F. Tong, Decoding reveals the contents of visual working memory in early visual areas. Nature 458, 632–635 (2009).
70
J. W. Bisley, D. Zaksas, J. A. Droll, T. Pasternak, Activity of neurons in cortical area MT during a memory for motion task. J. Neurophysiol. 91, 286–300 (2004).
71
V. P. Ferrera, K. K. Rudolph, J. H. Maunsell, Responses of neurons in the parietal and temporal visual pathways during a motion task. J. Neurosci. 14, 6171–6186 (1994).
72
L. G. Ungerleider, T. W. Galkin, R. Desimone, R. Gattass, Cortical connections of area V4 in the macaque. Cereb. Cortex 18, 477–499 (2008).
73
C. Cavada, P. S. Goldman-Rakic, Posterior parietal cortex in rhesus monkey: II. Evidence for segregated corticocortical networks linking sensory and limbic areas with the frontal lobe. J. Comp. Neurol. 287, 422–445 (1989).
74
G. Buzsáki, C. A. Anastassiou, C. Koch, The origin of extracellular fields and currents—EEG, ECoG, LFP and spikes. Nat. Rev. Neurosci. 13, 407–420 (2012).
75
M. Lundqvist, P. Herman, M. R. Warden, S. L. Brincat, E. K. Miller, Gamma and beta bursts during working memory readout suggest roles in its volitional control. Nat. Commun. 9, 394 (2018).
76
S. Ray, J. H. Maunsell, Different origins of gamma rhythm and high-gamma activity in macaque visual cortex. PLoS Biol. 9, e1000610 (2011).
77
E. J. Hwang, R. A. Andersen, Effects of visual stimulation on LFPs, spikes, and LFP-spike relations in PRR. J. Neurophysiol. 105, 1850–1860 (2011).
78
E. G. Antzoulatos, E. K. Miller, Synchronous beta rhythms of frontoparietal networks support only behaviorally relevant representations. eLife 5, e17822 (2016).
79
G. G. Gregoriou, S. J. Gotts, R. Desimone, Cell-type-specific synchronization of neural activity in FEF with V4 during attention. Neuron 73, 581–594 (2012).
80
T. J. Buschman, M. Siegel, J. E. Roy, E. K. Miller, Neural substrates of cognitive capacity limitations. Proc. Natl. Acad. Sci. U.S.A. 108, 11252–11255 (2011).
81
C. J. MacDowell, S. Tafazoli, T. J. Buschman, A Goldilocks theory of cognitive control: Balancing precision and efficiency with low-dimensional control states. Curr. Opin. Neurobiol. 76, 102606 (2022).
82
P. Sapountzis, S. Paneri, S. Papadopoulos, G. G. Gregoriou, PNAS2022. GitHub. https://github.com/gregorioulab/PNAS2022. Deposited 14 September 2022.

Information & Authors

Published in

Proceedings of the National Academy of Sciences
Vol. 119 | No. 40
October 4, 2022
PubMed: 36161937


Submission history

Received: February 14, 2022
Accepted: September 6, 2022
Published online: September 26, 2022
Published in issue: October 4, 2022

Change history

April 2, 2024: The Acknowledgments section has been updated to include a missing funding source.

Keywords

frontal eye field, ventrolateral prefrontal cortex, visual area V4, visual attention, working memory

Notes

This article is a PNAS Direct Submission.

Authors

Affiliations

Panagiotis Sapountzis2,1 [email protected]
Institute of Applied and Computational Mathematics, Foundation for Research and Technology Hellas, Heraklion, Crete, 70013 Greece
Department of Basic Sciences, Faculty of Medicine, University of Crete, Heraklion, Crete, 71003 Greece
Sofia Paneri1
Institute of Applied and Computational Mathematics, Foundation for Research and Technology Hellas, Heraklion, Crete, 70013 Greece
Department of Basic Sciences, Faculty of Medicine, University of Crete, Heraklion, Crete, 71003 Greece
Sotirios Papadopoulos
Department of Basic Sciences, Faculty of Medicine, University of Crete, Heraklion, Crete, 71003 Greece
Present address: University Lyon 1, Lyon Neuroscience Research Center, CRNL, INSERM U1028, CNRS UMR5292, Lyon, F-69000, France and Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR5229, 69500 Bron, France.
G. G. Gregoriou2
Institute of Applied and Computational Mathematics, Foundation for Research and Technology Hellas, Heraklion, Crete, 70013 Greece
Department of Basic Sciences, Faculty of Medicine, University of Crete, Heraklion, Crete, 71003 Greece

Notes

2
To whom correspondence may be addressed. Email: [email protected] or [email protected].
Author contributions: P.S., S. Paneri, and G.G.G. designed research; S. Paneri and G.G.G. performed research; P.S., S. Paneri, and S. Papadopoulos contributed new reagents/analytic tools and analyzed data; and P.S. and G.G.G. wrote the paper.
1
P.S. and S. Paneri contributed equally to this work.

Competing Interests

The authors declare no competing interest.
