Subjects

All procedures were conducted according to the National Institutes of Health guidelines for animal care and use and approved by the Institutional Animal Care and Use Committee at Stanford University School of Medicine and the University of California, Irvine. For subiculum imaging, eight Camk2a-Cre; Ai163 (ref. 36) mice (four male and four female), one Camk2-Cre mouse (female, JAX: 005359) and one C57BL/6 mouse (male) were used. For the Camk2-Cre mouse, AAV1-CAG-FLEX-GCaMP7f was injected in the right subiculum at anteroposterior (AP): −3.40 mm; lateromedial (ML): +1.88 mm; and dorsoventral (DV): −1.70 mm. For the C57BL/6 mouse, AAV1-Camk2a-GCaMP6f was injected in the right subiculum at the same coordinates. For CA1 imaging, 12 Ai94; Camk2a-tTA; Camk2a-Cre (JAX id: 024115 and 005359) mice (seven male and five female) were used. Mice were group housed with same-sex littermates until the time of surgery. At the time of surgery, mice were 8–12 weeks old. After surgery, mice were singly housed at 21–22°C and 29–41% humidity. Mice were kept on a 12-hour light/dark cycle and had ad libitum access to food and water in their home cages at all times. All experiments were carried out during the light phase. Data from both males and females were combined for analysis, as we did not observe sex differences in, for example, corner cell proportions, spike rates to different corner angles, and concavity and convexity.

GRIN lens implantation and baseplate placement

Mice were anesthetized with continuous 1–1.5% isoflurane and head fixed in a rodent stereotax. A three-axis digitally controlled micromanipulator guided by a digital atlas was used to determine bregma and lambda coordinates. To implant the gradient refractive index (GRIN) lens above the subiculum, a 1.8-mm-diameter circular craniotomy was made over the posterior cortex (centred at −3.28 mm anterior/posterior and +2 mm medial/lateral, relative to bregma). For CA1 imaging, the GRIN lens was implanted above the CA1 region of the hippocampus centred at −2.30 mm anterior/posterior (AP) and +1.75 mm medial/lateral (ML), relative to bregma. The dura was then gently removed and the cortex directly below the craniotomy aspirated using a 27- or 30-gauge blunt syringe needle attached to a vacuum pump under constant irrigation with sterile saline. The aspiration removed the corpus callosum and part of the dorsal hippocampal commissure above the imaging window but left the alveus intact. Excessive bleeding was controlled using a haemostatic sponge that had been torn into small pieces and soaked in sterile saline. The GRIN lens (0.25 pitch, 0.55 NA, 1.8 mm diameter and 4.31 mm in length, Edmund Optics) was then slowly lowered with a stereotaxic arm to the subiculum to a depth of −1.75 mm relative to the measurement of the skull surface at bregma. The GRIN lens was then fixed with cyanoacrylate and dental cement. Kwik-Sil (World Precision Instruments) was used to cover the lens at the end of surgery. Two weeks after the implantation of the GRIN lens, a small aluminium baseplate was cemented to the animal’s head on top of the existing dental cement. Specifically, Kwik-Sil was removed to expose the GRIN lens. A miniscope was then fitted into the baseplate and locked in position so that the GCaMP-expressing neurons and visible landmarks, such as blood vessels, were in focus in the field of view. After the installation of the baseplate, the imaging window was fixed for the long term with respect to the miniscope used during installation; thus, each mouse had a dedicated miniscope for all experiments. When not imaging, a plastic cap was placed in the baseplate to protect the GRIN lens from dust and dirt.

Behavioural experiments with imaging

After mice had fully recovered from the surgery, they were handled and allowed to habituate to wearing the head-mounted miniscope by freely exploring an open arena for 20 min every day for one week. The actual experiments took place in a different room from the habituation. The behaviour rig in this dedicated room, a compartment built from 80/20 framing, had two white walls and one black wall with salient decorations as distal visual cues, which were kept constant over the course of the entire study. For the experiments described below, all the walls of the arenas were acrylic and were tightly wrapped with black paper by default to reduce potential reflections from the LEDs on the scope. A local visual cue was always available on one of the walls in the arena, except for the oval environment. In each experiment, the floors of the arenas were covered with corn bedding. All animals’ movements were voluntary.

Circle, equilateral triangle, square, hexagon and low-wall square

This set of experiments was carried out in a circle, an equilateral triangle, a square, a hexagon and a low-wall square environment. The diameter of the circle was 35 cm. The side lengths were 30 cm for the equilateral triangle and square, and 18.5 cm for the hexagon. The height of all the environments was 30 cm except for the low-wall square, which was 15 cm. In total, we conducted 15, 18, 17, 18 and 12 sessions (20 min per session) from nine mice in the circular, triangular, square, hexagonal and low-wall square arenas, respectively. We recorded a maximum of two sessions per condition per mouse. For each mouse, we recorded one or two sessions per day. If two sessions were recorded from the same animal on a given day, they were carried out in different conditions with at least a two-hour gap between sessions. For each mouse, data from this set of experiments were aligned and concatenated, and the activity of neurons was tracked across the sessions. As described above, all the walls of the arenas were black. A local visual cue (strips of white masking tape) was present on one wall of each arena, covering the top half of the wall. For CA1 imaging, mice were placed into a familiar 25 × 25 cm square environment for a single 20-min recording session.

Trapezoid and 30-60-90 right triangle

This set of experiments was carried out in a right triangle (30°, 60°, 90°) and a trapezoid environment. Corner angles from the trapezoid were 55°, 90°, 90° and 125°. The dimensions of the mazes were 46 (L) × 28 (W) × 30 (H) cm. In total, we conducted 16 sessions each (25 min per session) from eight mice for the right triangle and trapezoid. Data from this set of experiments were aligned and concatenated, and the activity of neurons was tracked across the sessions for each mouse. Other recording protocols were the same as described above.

Insertion of a discrete corner in a square environment

This set of experiments was carried out in a large square environment with dimensions of 40 (L) × 40 (W) × 40 (H) cm. The experiments comprised a baseline session followed by four sessions with the insertion of a discrete corner into the square maze. In these sessions, the walls that formed the discrete corner were gradually separated by 0, 1.5, 3 and 6 cm. Starting from 3 cm, the animals were able to pass through the gap without difficulty. The dimensions of the inserted walls were 15 (W) × 30 (H) cm. For each condition, we recorded eight sessions (30 min per session) from eight mice by conducting a single session from each mouse per day. Data from this set of experiments were aligned and concatenated, and the activity of neurons was tracked throughout the sessions.

Square, rectangle, convex-1, convex-2, convex-3 and convex-m1

This set of experiments was carried out in a large square, rectangle and multiple convex environments that contained both concave and convex corners. The dimensions of the square were 40 (L) × 40 (W) × 40 (H) cm and those of the rectangle were 46 (L) × 28 (W) × 30 (H) cm. The convex arenas were all constructed based on the square environment using wood blocks or PVC sheets that were tightly wrapped with the same black paper. The convex corners in these environments had angles of 270° and 315°. Note that, for four out of ten mice, the convex-2 and -3 arenas were constructed in a mirrored layout compared to the arenas of the other six mice to control for any potential biases that could arise from the specific geometric configurations in the environment (Fig. 4c). For convex-m1 (Extended Data Fig. 7b), the northeast convex corner was decorated with white, rough-surface masking tape from the bottom all the way up to the top of the corner. For each condition, we recorded ten sessions (30 min per session) from ten mice, a single session from each mouse per day. For each mouse, data from this set of experiments were aligned and concatenated, and the activity of neurons was tracked across all the sessions.

Convex environment with an obtuse convex corner

This set of experiments was carried out in a convex environment that contained two 270° convex corners and one 225° convex corner (Extended Data Fig. 7e). The arena was constructed in the same manner as the other convex environments described above. Over two days, we recorded a total of 18 sessions (30 min per session) from nine mice, two sessions per mouse. Note that although the maze was rotated by 90° in the second session, the two sessions were combined for analysis.

Triangular and cylindrical objects

This set of experiments was first carried out in the convex-1 environment, followed by a 40 cm square environment containing two discrete objects (Extended Data Fig. 7h). The first object was an isosceles right triangle with a hypotenuse of 20 cm and a height of 7 cm (occasionally, animals climbed on top of the object). The second object was a cylinder with a diameter of 3 cm and a height of 14 cm. For this experiment, we recorded a total of eight sessions (30 min per session) from eight mice for each environment.

Shuttle box

The shuttle box consisted of two connected, 25 (L) × 25 (W) × 25 (H) cm compartments with distinct colours and visual cues (Extended Data Fig. 6a). The opening in the middle was 6.5 cm wide, so that the mouse could easily run between the two compartments during miniscope recordings. The black compartment was wrapped in black paper; the grey compartment was not. Over two days, we recorded a total of 18 sessions (20 min per session) from nine mice, two sessions per mouse.

Recordings in the dark or with trimmed whiskers

This set of experiments was carried out in a square environment with dimensions of 30 (L) × 30 (W) × 30 (H) cm. The animals had experience in the environment before this experiment. The experiments consisted of three sessions: a baseline session, a session recorded in complete darkness, and a session recorded after the mice’s whiskers were trimmed. For the dark recording, the ambient light was turned off immediately after the animal was placed inside the square box. The red LED (approximately 650 nm) on the miniscope was covered by black masking tape. This masking did not completely block the red light, so the behavioural camera could still detect the animal’s position. Before the masking, the intensity of the red LED was measured as approximately 12 lux at the distance of the animal’s head. However, after the masking, the intensity of the masked red LED was comparable to the measurement taken with the light meter sensor blocked (complete darkness, approximately 2 lux). The blue LED on the miniscope was completely blocked from the outside. For the whisker-trimmed session, facial whiskers were trimmed (not epilated) with scissors 12 h before the recording, until no visible whiskers remained on the face. For each condition, we recorded nine sessions (20 min per session) from nine mice by conducting a single session from each mouse per day. For each mouse, data from this set of experiments were aligned and concatenated, and the activity of neurons was tracked across these sessions. Note that, according to previous reports50,51,52, the number of hippocampal place cells decreases under both darkness and whisker-trimming conditions.

Square and oval

This set of experiments was carried out in the 30 cm square environment (day 1) and an oval environment (days 2 and 3) (Fig. 5a). The oval environment had an elliptical shape, with its major axis measuring 36 cm and minor axis measuring 23 cm. Notably, the oval experiment on day 3 was rotated 90° relative to day 2 (Fig. 5a). For each condition, we recorded nine sessions (25 min per session) from nine mice, a single session from each mouse per day. For each mouse, data from this set of experiments were aligned and concatenated, and the activity of neurons was tracked across all the sessions. Data from both the oval and rotated oval conditions were combined for analysis.

Two cylindrical objects

This set of experiments was first carried out in the convex-1 environment, followed by a 46 (L) × 28 (W) × 30 (H) cm rectangle environment containing two cylindrical objects (Fig. 5c). The first cylinder had a diameter of 3 cm and a height of 14 cm, while the second cylinder had a diameter of 9 cm and a height of 14 cm. For this experiment, we recorded a total of seven sessions (30 min per session) for each environment from seven mice.

Miniscope imaging data acquisition and preprocessing

Technical details for the custom-constructed miniscopes and general processing analyses are described in refs. 32,37,53 and at http://miniscope.org/index.php/Main_Page. In brief, this head-mounted scope had a mass of about 3 g and a single, flexible coaxial cable that carried power, control signals and imaging data to the miniscope open-source data acquisition (DAQ) hardware and software. In our experiments, we used Miniscope v.3, which had a 700 μm × 450 μm field of view with a resolution of 752 pixels × 480 pixels (approximately 1 μm per pixel). For subiculum imaging, we measured the effective image size (the area with detectable neurons) for each mouse and combined this information with histology. The anatomical region where neurons were recorded was approximately within a 450-μm diameter circular area centred around AP: −3.40 mm and ML: +2 mm. Owing to the limitations of 1-photon imaging, we believe the recordings were primarily from the deep layer of the subiculum. Images were acquired at approximately 30 frames per second (fps) and recorded to uncompressed avi files. The DAQ software also recorded the simultaneous behaviour of the mouse through a high-definition webcam (Logitech) at approximately 30 fps, with time stamps applied to both video streams for offline alignment.

For each set of experiments, miniscope videos of individual sessions were first concatenated and down-sampled by a factor of two, then motion corrected using the NoRMCorre MATLAB package54. To align the videos across different sessions for each animal, we applied an automatic two-dimensional (2D) image registration method (github.com/fordanic/image-registration) with rigid xy translations according to the maximum intensity projection images for each session. The registered videos for each animal were then concatenated together in chronological order to generate a combined dataset for extracting calcium activity.

To extract the calcium activity from the combined dataset, we used extended constrained non-negative matrix factorization for endoscopic data (CNMF-E)38,55, which enables simultaneous denoising, deconvolving and demixing of calcium imaging data. Key features include modelling of the large, rapidly fluctuating background, which allows good separation of single-neuron signals from the background, and the separation of partially overlapping neurons by taking each neuron’s spatial and temporal information into account (see ref. 38 for details). A deconvolution algorithm called OASIS39 was then applied to obtain the denoised neural activity and deconvolved spiking activity (Extended Data Fig. 1b). These extracted calcium signals for the combined dataset were then split back into each session according to their individual frame numbers. As the combined dataset was large (greater than 10 GB), we used the Sherlock HPC cluster hosted by Stanford University to process the data across 8–12 cores and 600–700 GB of RAM. While processing this combined dataset required significant computing resources, it enhanced our ability to track cells across sessions from different days. This process made it unnecessary to perform individual footprint alignment or cell registration across sessions. The position, head direction and speed of the animals were determined by applying a custom MATLAB script to the animal’s behavioural tracking video. Time points at which the speed of the animal was lower than 2 cm s−1 were identified and excluded from further analysis. We then used linear interpolation to temporally align the position data to the calcium imaging data.

Corner cell analyses

Calculation of spatial rate maps

After we obtained the deconvolved spiking activity of neurons, we binarized it by applying a threshold of three standard deviations of the deconvolved spiking activity for each neuron. The position data was sorted into 1.6 × 1.6 cm non-overlapping spatial bins. The spatial rate map for each neuron was constructed by dividing the total number of calcium spikes by the animal’s total occupancy in a given spatial bin. The rate maps were smoothed using a 2D convolution with a Gaussian filter that had a standard deviation of two.
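A minimal MATLAB sketch of this rate-map construction is given below; the variable names (spkRaw for the deconvolved trace, x and y for the animal’s position, fs for the sampling rate) are illustrative assumptions rather than the actual pipeline.

```matlab
% Minimal sketch of the rate-map construction; spkRaw, x, y and fs are assumed
% inputs (deconvolved activity, position in cm and sampling rate in Hz).
thr     = 3 * std(spkRaw);                 % 3-s.d. threshold on deconvolved activity
spk     = double(spkRaw(:) > thr);         % binarized calcium spikes (T x 1)
binSize = 1.6;                             % spatial bin size (cm)
xEdges  = 0:binSize:ceil(max(x));
yEdges  = 0:binSize:ceil(max(y));
occ     = histcounts2(x, y, xEdges, yEdges) / fs;               % occupancy in seconds
spkMap  = histcounts2(x(spk > 0), y(spk > 0), xEdges, yEdges);  % spike counts per bin
rateMap = spkMap ./ max(occ, eps);                              % spikes per second per bin
rateMap = imgaussfilt(rateMap, 2);                              % Gaussian smoothing, sigma = 2 bins
```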

Corner score for each field

To detect spatial fields in a given rate map, we first applied a threshold to filter the rate map. After filtering, each connected pixel region was considered a place field, and the x and y coordinates of the regional maximum of each field defined the field’s location. We used a filtering threshold of 0.3 times the maximum spike rate for identifying corner cells in smaller environments (for example, the circle, triangle, square and hexagon), and a filtering threshold of 0.4 for identifying corner cells in larger environments (for example, the 40 cm square, rectangle and convex environments, Fig. 4). These thresholds were determined from a search of threshold values ranging from 0.1 to 0.6. The threshold range that resulted in the best corner cell classification, as determined by the overall firing-rate difference between the corner and the centroid of an environment (for example, Fig. 1h), was 0.3–0.4 across different environments. The coordinates of the centroid and corners of the environments were automatically detected with manual corrections. For each field, we defined the corner score as:

$${\mathrm{corner\ score}}_{\mathrm{field}}=\frac{d1-d2}{d1+d2}$$

where d1 is the distance between the environmental centroid and the field, and d2 is the distance between the field and the nearest environmental corner. The score ranges from −1 for fields situated at the centroid of the arena to +1 for fields perfectly located at a corner (Extended Data Fig. 1f).
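The field detection and per-field score described above could be sketched as follows; rateMap comes from the previous sketch, while cornerXY and centroidXY (the corner and centroid locations in [row, column] bin coordinates) are assumed inputs.

```matlab
% Sketch of field detection and the per-field corner score; cornerXY (k x 2) and
% centroidXY (1 x 2) in [row column] bin coordinates are assumed inputs.
fieldThr = 0.3;                                    % 0.3-0.4 x peak rate (see text)
bw = rateMap > fieldThr * max(rateMap(:));         % threshold the rate map
cc = bwconncomp(bw);                               % connected pixel regions = fields
cornerScoreField = zeros(cc.NumObjects, 1);
for f = 1:cc.NumObjects
    pix = cc.PixelIdxList{f};
    [~, iMax] = max(rateMap(pix));                 % regional maximum = field location
    [r, c] = ind2sub(size(rateMap), pix(iMax));
    d1 = norm([r, c] - centroidXY);                % distance to arena centroid
    d2 = min(vecnorm(cornerXY - [r, c], 2, 2));    % distance to nearest corner
    cornerScoreField(f) = (d1 - d2) / (d1 + d2);   % -1 at centroid, +1 at a corner
end
```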

Corner score for each cell

There were two situations that needed to be considered when calculating the corner score for each cell (Extended Data Fig. 1g). First, if a cell had n fields in an environment that had k corners (n ≤ k), the corner score for that cell was defined as:

$${\mathrm{corner\ score}}_{\mathrm{cell}}=\frac{\sum _{n}{\mathrm{corner\ score}}_{\mathrm{field}}}{k},\quad (n\le k);$$

Second, if a cell had more fields than the number of environmental corners (n > k), the corner score for that cell was defined as the sum of the corner scores of the top k fields, minus the sum over the extra fields of |corner score − 1|, all divided by k. Namely,

$${\mathrm{corner\ score}}_{\mathrm{cell}}=\frac{\sum _{\mathrm{top}(n,k)}{\mathrm{corner\ score}}_{\mathrm{field}}-\sum _{\mathrm{extra}}\left|{\mathrm{corner\ score}}_{\mathrm{field}}-1\right|}{k},\quad (n > k)$$

where top(n,k) indicates the fields (also termed ‘major fields’) with the k highest field corner scores among the n fields, and ‘extra’ refers to the corner scores of the remaining fields (Extended Data Fig. 1g). In this case, each extra field contributed a penalty of |corner score − 1| to the final corner score of the cell, so that the score decreased if the cell had too many fields. The penalty for a given extra field ranged from 0 to 2, with 0 for a field at a corner and 2 for a field at the centre. As a result, as an extra field moves away from a corner, the penalty on the overall corner score gradually increases. Note that, among all the corner cells identified in the triangle, square and hexagon environments, only 7.8 ± 0.5% (mean ± s.e.m.; n = 9 mice) were classified under this situation.
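A sketch of the per-cell score covering both cases; cornerScoreField is the vector from the previous sketch and k is the number of environmental corners.

```matlab
% Sketch of the per-cell corner score for the two cases described above.
k = size(cornerXY, 1);                     % number of environmental corners
n = numel(cornerScoreField);               % number of detected fields
scores = sort(cornerScoreField, 'descend');
if n <= k
    cornerScoreCell = sum(scores) / k;
else
    major   = scores(1:k);                 % top-k ("major") fields
    extra   = scores(k+1:end);             % remaining fields
    penalty = sum(abs(extra - 1));         % 0 for a field at a corner, 2 at the centre
    cornerScoreCell = (sum(major) - penalty) / k;
end
```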

Final definition of corner cells

To classify a corner cell, the timing of calcium spikes for each neuron was circularly shuffled 1,000 times. For each shuffle, spike times were shifted randomly by 5–95% of the total data length, rate maps were regenerated and the corner score for each cell was recalculated. Note, for the recalculation of corner scores for the shuffled rate maps, we did not use the aforementioned penalization process. This is because shuffled rate maps often exhibited a greater number of fields than the number of corners, and applying the penalization would lower the 95th percentile score of the shuffled distribution (that is, more neurons would be classified as corner cells). Thus, not using this penalization process in calculating shuffled corner scores kept the 95th percentile of the shuffled distribution as high as possible for each cell, ensuring a stringent selection criterion for corner cells (Extended Data Fig. 2a–d). Alternatively, we also attempted to generate the null distribution by shuffling the locations of place fields directly on the original rate map. Although the two methods gave similar results in terms of characterizing corner cells, the latter approach tended to misclassify neurons with few place fields as corner cells (for example, a neuron with only one field located at a corner). Therefore, we used the former shuffling method to generate the null distribution. Finally, we defined a corner cell as a cell: (1) whose corner score passed the 95th percentile of the shuffled scores (Extended Data Fig. 1h,i), (2) whose distance between any two fields (major fields, if the number of fields was greater than the number of corners) was greater than half the distance between the corner and centroid of the environment (Extended Data Fig. 1j) and (3) whose within-session (two halves) stability was higher than 0.3 (Extended Data Fig. 1k), as determined by the 95th percentile of the random within-session stability distribution obtained using shuffled spikes.
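The circular-shuffle criterion could be sketched as below; computeRateMap and computeCornerScore are hypothetical helper functions standing in for the procedures above, and spk, x, y and fs are the assumed inputs from the earlier sketches.

```matlab
% Hedged sketch of criterion (1); computeRateMap and computeCornerScore are
% hypothetical helpers standing in for the procedures described above.
nShuf = 1000;
T = numel(spk);
shufScore = zeros(nShuf, 1);
for s = 1:nShuf
    shift   = randi([round(0.05 * T), round(0.95 * T)]);   % shift by 5-95% of data length
    spkShuf = circshift(spk, shift);                        % circularly shuffled spike train
    mapShuf = computeRateMap(spkShuf, x, y, fs);
    shufScore(s) = computeCornerScore(mapShuf, cornerXY, centroidXY);  % no penalty term here
end
passesShuffle = cornerScoreCell > prctile(shufScore, 95);   % criterion (1); the field-spacing
                                                            % and stability criteria (2)-(3)
                                                            % are applied in addition
```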

Identification of convex corner cells

To identify convex corner cells, we used similar methods to those described above for the concave corner cells, with a minor modification. Namely, after the detection of the field locations on a rate map, we applied a polygon mask to the map using the locations of the convex corners as vertices. This polygon mask was generated using the built-in MATLAB function poly2mask. We then considered only the extracted polygon region when calculating corner scores and the corresponding shuffles. The polygon mask was used to avoid nonlinearity in the corner score calculation in the convex environment, in particular when the distance between the location of a field (for example, a field at a concave corner in the convex-1 environment) and the environment centre is greater than the distance between the centre and the convex corner.
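A minimal sketch of the polygon-masking step, assuming convexXY holds the convex-corner vertices in bin coordinates:

```matlab
% Restrict the rate map to the polygon defined by the convex corners before
% computing corner scores; convexXY ([x y] vertices in bin units) is assumed.
[nRows, nCols] = size(rateMap);
mask = poly2mask(convexXY(:, 1), convexXY(:, 2), nRows, nCols);   % built-in poly2mask
rateMapMasked = rateMap;
rateMapMasked(~mask) = NaN;            % only the polygon region enters the corner score
```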

Measuring the peak spike rate at corners

To measure the peak spike rate at each corner of an environment, we first identified the area near the corner using a 2D convolution between two matrices, M and V. M is the same size as the rate map, containing all zero elements except for the corner bin, which is set to one. V is a square matrix of ones and can be variable in size. For our analysis, we used a 12 × 12 matrix V, which isolated a corner region of approximately 10 cm around the corner. We then took the maximum spike rate in this region as the peak spike rate at the corner. For some specific analyses, owing to the unique position or geometry of the region of interest (for example, the inserted discrete corner and objects), we decreased the size of the matrix V to obtain a more restricted region of interest for measurement. Specifically, we measured approximately 5 cm around the discrete corner (Fig. 3), approximately 5 cm around the vertices and faces of the triangular object (Extended Data Fig. 7) and approximately 5 cm outside of the cylinders (Fig. 5). To ensure the robustness of our findings, we tried various sizes of V in the 2D convolution, and found that the results were largely consistent with those presented in the manuscript.
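A sketch of this convolution-based region selection, with cornerBin giving the [row, column] bin of one corner (an assumed input):

```matlab
% Isolate the neighbourhood of one corner with a 2D convolution and take the
% peak rate there; cornerBin ([row column]) is an assumed input.
M = zeros(size(rateMap));
M(cornerBin(1), cornerBin(2)) = 1;         % all zeros except the corner bin
V = ones(12);                              % 12 x 12 bins, roughly 10 cm around the corner
region = conv2(M, V, 'same') > 0;          % logical mask of the corner region
peakRate = max(rateMap(region));           % peak spike rate at this corner
```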

Corrections of spike rates on the rate map

When comparing spike rates across different corners, it is important to consider the potential impact of the animal’s occupancy and movement patterns on the measurements (Extended Data Fig. 4a–f). To account for any measurement biases associated with the animal’s behaviour, we generated a simulated rate map using a simulated neuron that fired along the animal’s trajectory, at the animal’s measured speed, at the overall mean spike rate observed across all neurons of a given mouse (Extended Data Fig. 4c). We then divided the raw rate map by the simulated rate map to obtain the corrected rate map (Extended Data Fig. 4e). This method ensured that behaviour-related factors were present in both the raw and simulated rate maps, and were therefore removed from the corrected rate map (Extended Data Fig. 4a–f).
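One possible implementation of this correction, reusing the hypothetical computeRateMap helper and assuming meanRate is the overall mean spike rate across neurons:

```matlab
% Sketch of the behaviour correction: a simulated neuron fires at the overall
% mean rate along the real trajectory; meanRate and computeRateMap are assumptions.
pSpike  = meanRate / fs;                          % per-frame spike probability
simSpk  = double(rand(size(spk)) < pSpike);       % simulated spikes along the trajectory
simMap  = computeRateMap(simSpk, x, y, fs);       % simulated rate map
corrMap = rateMap ./ max(simMap, eps);            % corrected map = raw / simulated
```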

Measuring pairwise anatomical distances

To measure the pairwise anatomical distances between neurons, we calculated the Euclidean distance between the centroid locations of each neuron pair under the imaging window for each mouse. We then quantified the average intragroup and intergroup distances for each neuron based on its group identity (for example, concave versus convex corner cells). The final result for each group was averaged across all the neurons. We hypothesized that if functionally defined neuronal groups were anatomically clustered, the intergroup distance would be greater than the intragroup distance.
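A sketch of the intra- versus intergroup distance computation, with centroids (nCells × 2 footprint centroids) and groupId (a group label per cell) as assumed inputs:

```matlab
% Intra- vs intergroup anatomical distances; centroids and groupId are assumed inputs.
D = squareform(pdist(centroids));                 % pairwise Euclidean distances
D(1:size(D, 1) + 1:end) = NaN;                    % ignore self-distances
intra = arrayfun(@(i) mean(D(i, groupId == groupId(i)), 'omitnan'), 1:numel(groupId));
inter = arrayfun(@(i) mean(D(i, groupId ~= groupId(i)), 'omitnan'), 1:numel(groupId));
% anatomical clustering would predict mean(inter) > mean(intra)
```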

Boundary vector cell analyses

Rate maps of all the neurons were generated by dividing the open arena into 1.6 cm × 1.6 cm bins and calculating the spike rate in each bin. The maps were smoothed using a 2D convolution with a Gaussian filter that had a standard deviation of 2. To detect boundary vector cells (BVCs), we used a method based on border scores, which we calculated as described previously29,56:

$${\rm{borderscore}}=\frac{{\rm{CM}}-{\rm{DM}}}{{\rm{CM}}+{\rm{DM}}}$$

where CM is the proportion of high firing-rate bins located along one of the walls and DM is the normalized mean product of the firing rate and distance of a high firing-rate bin to the nearest wall. We identified BVCs as cells with a border score above 0.6 and whose largest field covered more than 70% of the nearest wall and whose within-session stability was higher than 0.3. Additionally, BVCs needed to have significant spatial information (that is, as in place cells, described below). Of note, our conclusion regarding BVCs and corner cells remained the same when we varied the wall coverage from 50% to 90% for classifying BVCs.
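One simplified reading of the CM and DM definitions is sketched below; the high-rate cut-off (0.3 × peak) and the normalization of wall distances are assumptions of this sketch rather than values given in the text.

```matlab
% Hedged sketch of the border score; the 0.3 x peak cut-off and the distance
% normalization are illustrative assumptions.
highBins = rateMap > 0.3 * max(rateMap(:));                    % "high firing-rate" bins
[nR, nC] = size(rateMap);
[cols, rows] = meshgrid(1:nC, 1:nR);
dWall = min(cat(3, rows - 1, nR - rows, cols - 1, nC - cols), [], 3);  % bins to nearest wall
dWall = dWall / max(dWall(:));                                 % normalize to [0, 1]
CM = sum(highBins(:) & dWall(:) == 0) / sum(highBins(:));      % high-rate bins along the walls
w  = rateMap(highBins) / max(rateMap(:));                      % normalized firing rates
DM = mean(w .* dWall(highBins));                               % rate-weighted mean wall distance
borderScore = (CM - DM) / (CM + DM);
```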

Place cell analyses

Spatial information and identification of place cells

To quantify the information content of a given neuron’s activity, we calculated spatial information scores in bits per spike (that is, calcium spike) for each neuron according to the following formula57,

$${\rm{bits}}\,{\rm{per}}\,{\rm{spike}}=\mathop{\sum }\limits_{i=1}^{n}{P}_{i}\frac{{\lambda }_{i}}{\lambda }{\log }_{2}\frac{{\lambda }_{i}}{\lambda },$$

where Pi is the probability of the mouse occupying the ith bin for the neuron, λi is the neuron’s unsmoothed event rate in the ith bin, while λ is the mean rate of the neuron across the entire session. Bins with total occupancy time of less than 0.1 s were excluded from the calculation. To identify place cells, the timing of calcium spikes for each neuron was circularly shuffled 1,000 times and spatial information (bits per spike) recalculated for each shuffle. This generated a distribution of shuffled information scores for each individual neuron. The value at the 95th percentile of each shuffled distribution was used as the threshold for classifying a given neuron as a place cell, and we excluded cells with an overall mean spike rate less than the 5th percentile of the mean spike rate distribution (that is, approximately 0.1 Hz) of all the neurons in that animal.
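A sketch of the bits-per-spike computation, reusing the unsmoothed spkMap and occ from the rate-map sketch above:

```matlab
% Spatial information in bits per spike from the unsmoothed spike and occupancy maps.
valid   = occ(:) >= 0.1;                       % exclude bins occupied < 0.1 s
Pi      = occ(valid) / sum(occ(valid));        % occupancy probability per bin
lambdai = spkMap(valid) ./ occ(valid);         % unsmoothed event rate per bin
lambda  = sum(Pi .* lambdai);                  % overall mean event rate
terms   = Pi .* (lambdai / lambda) .* log2(lambdai / lambda);
bitsPerSpike = sum(terms(~isnan(terms)));      % bins with zero rate contribute zero
```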

Position decoding using a naïve Bayes classifier

We used a naive Bayes classifier to estimate the probability of the animal’s location given the activity of all the recorded neurons. The method is described in detail in our previous publication37. In brief, the binarized, deconvolved spike activity from all neurons was binned into non-overlapping time bins of 0.8 s. The M × N spike data matrix, where M is the number of time bins and N is the number of neurons, was then used to train the decoder with an M × 1 vector of location labels (namely, concatenating each column of position bins vertically). The posterior probability of observing the animal’s position Y given neural activity X could then be inferred from Bayes’ rule as:

$$P\left(Y=y| {X}_{1},{X}_{2}\ldots ,{X}_{N}\right)=\frac{P({X}_{1},{X}_{2},\ldots ,{X}_{N}| Y=y)P(Y=y)}{P({X}_{1},{X}_{2},\ldots ,{X}_{N})},$$

where X = (X1, X2, … XN) is the activity of all neurons, y is one of the spatial bins that the animal visited at a given time, and P(Y = y) is the prior probability of the animal being in spatial bin y. We used an empirical prior as it showed slightly better performance than a flat prior. P(X1, X2, …, XN) is the overall firing probability for all neurons, which can be considered as a constant and does not need to be estimated directly. Thus, the relationship can be simplified to:

$$\widehat{y}=\arg \mathop{\max }\limits_{y}\,P(Y=y)\mathop{\prod }\limits_{i=1}^{N}P\left({X}_{i}| Y=y\right),$$

where \(\widehat{y}\) is the animal’s predicted location, based on which spatial bin has the maximum probability across all the spatial bins for a given time. To estimate P(Xi|Y = y), we applied the built-in function fitcnb in MATLAB to fit a multinomial distribution using the bag-of-tokens model with Laplace smoothing.
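A minimal decoding sketch with MATLAB’s fitcnb; spkBinned (M × N spike counts), posBin (M × 1 vectorized bin labels) and the train/test split are assumed inputs, and fitcnb’s default empirical prior matches the prior used here.

```matlab
% Naive Bayes position decoding with a multinomial ("bag-of-tokens") model;
% spkBinned, posBin and the train/test indices are assumed inputs.
% (The Laplace smoothing mentioned in the text is not shown explicitly here.)
mdl = fitcnb(spkBinned(trainIdx, :), posBin(trainIdx), ...
             'DistributionNames', 'mn');                   % default prior is empirical
[yHat, posterior] = predict(mdl, spkBinned(testIdx, :));   % MAP bin and full posterior
```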

To reduce occasional erratic jumps in position estimates, we implemented a two-step Bayesian method by introducing a continuity constraint58, which incorporated information regarding the decoded position in the previous time step and the animal’s running speed to calculate the probability of the current location y. The continuity constraint for all the spatial bins Y at time t followed a 2D gaussian distribution centred at position yt−1, which can be written as:

$${\mathscr{N}}({y}_{t-1},{\,\sigma }_{t}^{2})=c\times \exp \left(\frac{-{\parallel {y}_{t-1}-Y\parallel }^{2}}{2{\sigma }_{t}^{2}}\right),$$

$${\sigma }_{t}=a{{\rm{v}}}_{t},$$

where c is a scaling factor and vt is the instantaneous speed of the animal between time t − 1 and t. vt is scaled by \(a\), which was empirically set to 2.5. The final reconstructed position with the two-step Bayesian method can then be written as:

$${\widehat{y}}_{2{\rm{step}}}=\arg \mathop{\max }\limits_{y}\,{\mathscr{N}}({y}_{t-1},{\sigma }_{t}^{2})P(Y=y)\mathop{\prod }\limits_{i=1}^{N}P\left({X}_{i}| Y=y\right).$$

Decoded vectorized positions were then mapped back onto 2D space. The final decoding error was averaged from ten-fold cross-validation. For each fold, the decoding error was calculated as the mean Euclidean distance between the decoded position and the animal’s true position across all time bins.
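A sketch of how the continuity constraint could be applied to the naive Bayes posterior; posterior comes from predict above, binXY maps each vectorized bin (in the decoder’s class order) to its [x y] centre in cm, and speed is the instantaneous running speed (all assumed inputs).

```matlab
% Two-step Bayesian decoding: weight the posterior by a Gaussian centred on the
% previously decoded position; posterior, binXY and speed are assumed inputs, and
% binXY is assumed to be ordered to match the decoder's spatial-bin classes.
a = 2.5;                                           % empirical speed scaling (see text)
nT = size(posterior, 1);
yHat = zeros(nT, 1);
[~, yHat(1)] = max(posterior(1, :));               % first bin: plain MAP estimate
for t = 2:nT
    sigma = a * max(speed(t), eps);                % sigma_t = a * v_t
    d2 = sum((binXY - binXY(yHat(t-1), :)).^2, 2); % squared distance to previous estimate
    prior = exp(-d2 / (2 * sigma^2));              % Gaussian continuity constraint
    [~, yHat(t)] = max(posterior(t, :)' .* prior); % combined two-step estimate
end
decodedXY = binXY(yHat, :);                        % map bin indices back to 2D positions
```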

To test the contribution of corner cells to spatial coding, we first trained the decoder using all neurons and then replaced the neural activity of corner cells with vectors of zeroes in the test data before making predictions. It is important to note that this activity removal procedure was only applied to the data used for predicting locations and not for training, as ablating neurons directly from the training data would result in the model learning to compensate for the missing information59. We performed this analysis using ten-fold cross-validation for each mouse. To compare the performance of the corner-cell-removed decoder to the full decoder, we first calculated the 2D decoding error map of a session for each condition, and then obtained a map of the error ratio by dividing the error map from the corner-cell-removed decoder by the error map from the full decoder (Extended Data Fig. 2g). We then compared the error ratio at the corners of the environment to that at the centre of the environment. For quadrant decoding in the square environment (Fig. 1l), we trained and tested the decoder using only the identified corner cells, without the two-step constraint, using ten-fold cross-validation. For the shuffled condition, the decoder was trained and tested 100 times using calcium spikes circularly shuffled over time. The probability in the correct quadrant was compared between the corner-cell-trained and shuffled decoders. For decoding the geometry of different environments (Extended Data Fig. 5d–f), we concatenated the data (time bin = 400 ms) with neurons tracked from the circle, triangle, square and hexagon environments for each animal. The data was then resampled from an 8 cm diameter circular area either in the centre or near the corner/boundary of the environment. The data length was matched between the two areas and the decoding labels for each environment were identical (numerical, 1 for circle, 2 for square, 3 for triangle, 4 for hexagon). The decoder was then trained and tested for each mouse using ten-fold cross-validation.
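A sketch of the test-time ablation of corner cells for one cross-validation fold; isCornerCell is a logical vector over neurons, and the other variables are as in the decoding sketches above.

```matlab
% Corner-cell activity is zeroed in the test data only; the decoder itself is
% trained on intact data (variables as in the previous decoding sketches, and
% binXY is assumed to be indexed by the vectorized bin label).
mdl = fitcnb(spkBinned(trainIdx, :), posBin(trainIdx), 'DistributionNames', 'mn');
Xtest = spkBinned(testIdx, :);
Xtest(:, isCornerCell) = 0;                         % remove corner-cell activity at test time
yFull    = predict(mdl, spkBinned(testIdx, :));     % full decoder
yAblated = predict(mdl, Xtest);                     % corner-cell-removed decoder
errFull    = vecnorm(binXY(yFull, :)    - binXY(posBin(testIdx), :), 2, 2);  % errors (cm)
errAblated = vecnorm(binXY(yAblated, :) - binXY(posBin(testIdx), :), 2, 2);
```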

Visualization of low-dimensional neural manifold

We implemented a two-step dimensionality reduction method based on a prior publication42. First, we took the binarized, deconvolved spike activity from all neurons for each session (time bin size = 67 ms) and convolved it with a Gaussian filter with σ = 333 ms. As a result, each column of the matrix represents the smoothed firing rate of each cell over time. Then, we z-scored the smoothed firing rate of each cell. Next, we proceeded with dimensionality reductions on this smoothed and z-scored data matrix (number of time bins × number of neurons). First, to improve robustness to noise, we performed a principal component analysis (PCA) on the data matrix. Next, we selected the top ten principal components from the PCA results to carry out Uniform Manifold Approximation and Projection (UMAP), reducing the ten principal components into a 3D visualization. The parameters for this UMAP were set as follows: min_dist = 0.1, n_neighbors = 100 and n_components = 3. Note that the general structure of the low-dimensional neural manifold remained largely the same when we varied the number of principal components from 5 to 30 and adjusted the parameters for UMAP.
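A sketch of this two-step embedding; spkBinnedFine (time × neurons, 67-ms bins) is an assumed input, and run_umap refers to the MATLAB UMAP implementation available on the File Exchange, whose exact argument names are an assumption here.

```matlab
% Two-step embedding: Gaussian smoothing and z-scoring, PCA, then UMAP on the
% top ten PCs; spkBinnedFine is an assumed input and run_umap is the File
% Exchange UMAP implementation (argument names assumed).
fs    = 1 / 0.067;                                  % ~15 Hz for 67-ms bins
sigma = 0.333 * fs;                                 % 333-ms Gaussian kernel, in bins
t     = -ceil(3 * sigma):ceil(3 * sigma);
g     = exp(-t.^2 / (2 * sigma^2));  g = g / sum(g);
rates = zscore(conv2(double(spkBinnedFine), g(:), 'same'));  % smooth and z-score each cell
[~, pcScore] = pca(rates);                          % PCA for robustness to noise
emb = run_umap(pcScore(:, 1:10), 'min_dist', 0.1, ...
               'n_neighbors', 100, 'n_components', 3);       % 3D UMAP embedding
```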

Linear–nonlinear Poisson (LN) model

Calculation of allocentric and egocentric corner bearing

For each time point in the recording session, the allocentric bearing of the animal to the nearest corner (Extended Data Fig. 8b) was calculated using the x, y coordinates of the corners and the animal as follows:

$${\mathrm{corner\ bearing}}_{\mathrm{allocentric}}=\mathrm{arctan2}({y}_{\mathrm{corner}}-{y}_{\mathrm{animal}},\,{x}_{\mathrm{corner}}-{x}_{\mathrm{animal}})$$

Similarly, allocentric bearings to the nearest wall or the centre of the environment were calculated as:

$${\mathrm{wall\ bearing}}_{\mathrm{allocentric}}=\mathrm{arctan2}({y}_{\mathrm{wall}}-{y}_{\mathrm{animal}},\,{x}_{\mathrm{wall}}-{x}_{\mathrm{animal}})$$

$${\mathrm{centre\ bearing}}_{\mathrm{allocentric}}=\mathrm{arctan2}({y}_{\mathrm{centre}}-{y}_{\mathrm{animal}},\,{x}_{\mathrm{centre}}-{x}_{\mathrm{animal}})$$

We then derived the egocentric corner bearing of the animal (Extended Data Fig. 8a–c) by subtracting the animal’s allocentric head direction from the allocentric corner bearing:

$${\mathrm{corner\ bearing}}_{\mathrm{egocentric}}={\mathrm{corner\ bearing}}_{\mathrm{allocentric}}-\mathrm{head\ direction}$$

Note that a corner bearing of 0 degrees indicates that the corner was directly in front of the animal, as illustrated in Extended Data Fig. 8c. Similarly, egocentric bearings to the nearest wall or the centre were calculated as follows:

$${\mathrm{wall\ bearing}}_{\mathrm{egocentric}}={\mathrm{wall\ bearing}}_{\mathrm{allocentric}}-\mathrm{head\ direction}$$

$${\mathrm{centre\ bearing}}_{\mathrm{egocentric}}={\mathrm{centre\ bearing}}_{\mathrm{allocentric}}-\mathrm{head\ direction}$$
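A sketch of these bearing calculations in MATLAB; x, y and hd (position traces and allocentric head direction in radians) and cornerXY are assumed inputs.

```matlab
% Allocentric and egocentric corner bearing per time point; x, y, hd and cornerXY
% are assumed inputs (head direction in radians).
T = numel(x);
egoBearing = zeros(T, 1);
for t = 1:T
    dXY = cornerXY - [x(t), y(t)];                        % vectors from animal to each corner
    [~, iNear] = min(vecnorm(dXY, 2, 2));                 % nearest corner at this time point
    alloBearing = atan2(dXY(iNear, 2), dXY(iNear, 1));    % allocentric corner bearing
    egoBearing(t) = mod(alloBearing - hd(t) + pi, 2*pi) - pi;  % egocentric; 0 = straight ahead
end
```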

Implementation of the linear–nonlinear Poisson (LN) model

The LN model is a generalized linear model (GLM) framework which allows unbiased identification of functional cell types encoding multiplexed navigational variables. This framework was described in a previous publication60 and here, we applied the same method to our calcium imaging data in the subiculum. Briefly, for Model 1 in Extended Data Fig. 8, 15 models were built in the LN framework, including position (P), head direction (H), speed (S), egocentric corner bearing (E), position & head direction (PH), position & speed (PS), position & egocentric corner bearing (PE), head direction & speed (HS), head direction & egocentric bearing (HE), speed & egocentric bearing (SE), position & head direction & speed (PHS), position & head direction & egocentric bearing (PHE), position & speed & egocentric bearing (PSE), head direction & speed & egocentric bearing (HSE) and position & head direction & speed & egocentric bearing (PHSE). For each model, the dependence of spiking on the corresponding variable(s) was quantified by estimating the spike rate (rt) of a neuron during time bin t as an exponential function of the sum of variable values (for example, the animal’s position at time bin t, indicated through an ‘animal-state’ vector) projected onto a corresponding set of parameters (Extended Data Fig. 8d). This can be mathematically expressed as:

$${\bf{r}}=\frac{\exp (\sum _{i}{X}_{i}^{T}{{\bf{w}}}_{i})}{{\rm{d}}t}$$

where r is a vector of firing rates for one neuron over T time points, i indexes the variable (i ∈ {P, H, S, E}), Xi is the design matrix in which each column is an animal-state vector xi for variable i at one time bin, wi is a column vector of learned parameters that converts animal-state vectors into a firing-rate contribution and dt is the time bin width.

We used the binarized deconvolved spikes as the neuron spiking data with a time bin width of 500 ms. The design matrix contained the animal’s behavioural state, in which we binned position into 2 cm² bins, head direction and egocentric corner bearing into 20-degree bins, and speed into 2 cm s−1 bins. Each vector in the design matrix denotes a binned variable value. All elements of this vector are 0, except for a single element that corresponds to the bin of the current animal-state. To learn the variable parameters wi, we used the built-in fminunc function in MATLAB to maximize the Poisson log-likelihood of the observed spike train (n) given the model spike number (r × dt) and under the prior knowledge that the parameters should be smooth. Model performance for each cell was computed as the increase in Pearson’s correlation (between the predicted and the true firing rates) of the model compared to the 95th percentile of shuffled correlations (the true firing rate was circularly shuffled 500 times). Performance was quantified through ten-fold cross-validation, where each fold was a random selection of 10% of the data. To determine the best-fit model for a given neuron, we used a heuristic forward-search method that determined whether adding variables significantly improved model performance (P < 0.05 for a one-sided sign-rank test, n = 10 cross-validation folds).
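A greatly simplified sketch of fitting one sub-model with fminunc, using a single variable and omitting the smoothness prior and analytic gradient of the full framework; X (T × nBins one-hot design matrix), n (T × 1 spike counts) and dt = 0.5 s are assumed inputs.

```matlab
% Simplified single-variable Poisson LN fit (no smoothness prior, numerical
% gradient); X, n and dt are assumed inputs. The expected spike count per bin
% is exp(X*w), i.e. r*dt as in the equation above.
negLL = @(w) sum(exp(X * w) - n .* (X * w));        % negative Poisson log-likelihood
                                                    % (up to terms independent of w)
opts = optimoptions('fminunc', 'Algorithm', 'quasi-newton', 'Display', 'off');
wHat = fminunc(negLL, zeros(size(X, 2), 1), opts);  % learned parameters for this variable
rHat = exp(X * wHat) / dt;                          % model firing rate
```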

Using LN models to identify egocentric corner cells

To identify egocentric corner coding in an unbiased manner, we replaced the allocentric position (P) in Model 1 with egocentric corner distance (D, bin size = 2 cm) to facilitate the identification of egocentric corner cells (Model 2, Extended Data Fig. 9a). However, encoding for egocentric corner bearing, particularly in rotationally symmetric environments, could potentially be confounded by other correlated variables, such as egocentric wall bearing (circular correlation with corner bearing = 0.43)27,44 or egocentric centre bearing (circular correlation with corner bearing = −0.73)12. To rule out the possibility that the observed encoding for egocentric corner bearing in Model 2 was actually due to encoding for egocentric wall or centre bearing, we next trained two separate LN models in which egocentric corner bearing and corner distance were replaced by egocentric wall bearing and wall distance (Model 3, Extended Data Fig. 9c), or by egocentric centre bearing and centre distance (Model 4, Extended Data Fig. 9c). As Models 2, 3 and 4 were trained and tested using the same data, we compared the model fit of neurons with egocentric corner modulation in Model 2 to the fit of the same neurons in Model 3 and Model 4. Neurons that exhibited a significantly better fit (higher increase in correlation, n = 10 folds) in Model 2 compared to Model 3 or 4 were considered potential neurons encoding egocentric corner bearing. Finally, to rule out the possibility that egocentric corner coding could artifactually result from the conjunction of position and head direction12, we also compared the neurons’ fits in Model 2 to the position and head direction groups (P, H, PH, PHS) in Model 1 (Extended Data Fig. 8). Neurons that met these criteria were considered to significantly encode corners in an egocentric reference frame.

To further disentangle the correlations among egocentric bearing variables in rectilinear environments, we repeated the same analysis (as described above) in the right triangle environment. In the right triangle, the circular correlation between corner and wall bearings decreased to 0.09, and the correlation between corner and centre bearings shifted to −0.38. Correlations between egocentric distances also shifted by 0.2 to 0.4 towards zero. Thus, in the right triangle environment, tuning to corners becomes sufficiently distinct from tuning to walls or the centre.

Histology

After the imaging experiments were concluded, mice were deeply anesthetized with isoflurane and transcardially perfused with 10 ml of phosphate-buffered saline (PBS), followed by 30 ml of 4% paraformaldehyde-containing phosphate buffer. The brains were removed and left in 4% paraformaldehyde overnight. The next day, samples were transferred to 30% sucrose in PBS and stored at 4°C. At least 24 h later, the brains were sectioned coronally into 30-µm-thick sections using a microtome (Leica SM2010R, Germany). All sections were counterstained with 10 μM DAPI, mounted and cover-slipped with antifade mounting media (Vectashield). Images were acquired by an automated fluorescent slide scanner (Olympus VS120-S6 slide scanner, Japan) under ×10 magnification.

Data inclusion criteria and statistical analysis

After a certain period postsurgery, imaging quality began to decline in some animals, which led to slight variations in the number of mice used in each set of experiments, ranging from 7 to 10. We evaluated the imaging quality of each mouse before executing each set of experiments. No mice were excluded from the analyses as long as the experiments were executed. For experiments with two identical sessions for a given condition (for example, Figs. 1 and 2), sessions with fewer than three identified corner cells were excluded to minimize measurement noise in spike rates. This criterion resulted in the exclusion of only one session from one mouse in Fig. 2e.

Analyses and statistical tests were performed using MATLAB (2020a) and GraphPad Prism 9. Data are presented as mean ± s.e.m. For normality checks, different test methods (D’Agostino and Pearson, Anderson–Darling, Shapiro–Wilk and Kolmogorov–Smirnov) indicated that only a portion of the data in our statistical analyses followed a Gaussian distribution. Thus, a two-tailed Wilcoxon signed-rank test was used for two-group comparisons throughout the study. We also validated that conducting statistical analyses with a two-tailed paired t-test yielded consistent results and did not alter any conclusions. For statistical comparisons across more than two groups, repeated-measures analysis of variance (ANOVA) was used before pairwise comparisons. All statistical tests were conducted on a per-mouse basis. In cases where an experiment involved two sessions, the data were averaged across these sessions, as indicated in the corresponding text or figure legend. For example, in Fig. 1g, the proportion of corner cells was determined by averaging the proportions of corner cells in session 1 (a single number) and session 2 (a single number). Similarly, in Fig. 1l, the decoding accuracy for each mouse was averaged using the mean decoding accuracy of session 1 (a single number) and session 2 (a single number). In all experiments, the level of statistical significance was defined as P ≤ 0.05.

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.


