Virtual Reality system for freely-moving rodents

Spatial navigation, active sensing, and most cognitive functions rely on a tight link between motor output and sensory input. Virtual reality (VR) systems simulate the sensorimotor loop, allowing flexible manipulation of enriched sensory input. Conventional rodent VR systems provide 3D visual cues linked to restrained locomotion on a treadmill, leading to a mismatch between visual and most other sensory inputs, sensory-motor conflicts, as well as restricted naturalistic behavior. To rectify these limitations, we developed a VR system (ratCAVE) that provides realistic and low-latency visual feedback directly to head movements of completely unrestrained rodents. Immersed in this VR system, rats displayed naturalistic behavior by spontaneously interacting with and hugging virtual walls, exploring virtual objects, and avoiding virtual cliffs. We further illustrate the effect of ratCAVE-VR manipulation on hippocampal place fields. The newly-developed methodology enables a wide range of experiments involving flexible manipulation of visual feedback in freely-moving behaving animals.


INTRODUCTION
Movement is a fundamental element in the action-perception loop that is critical for most cognitive functions, such as decision-making, memory and spatial navigation.
Internally-driven locomotor, head and sensor movements, the exploratory repertoire of a naturally-behaving animal, allow it to actively sample sensory information from the outside world for optimal detection and encoding, as well as for guidance of behavior [1][2][3][4][5] . Recognition that the closed-loop link between internal dynamics, motor output and sensory processing gives rise to predictive coding, attention and flexible motor control [6][7][8] is encouraging the use of a new experimental paradigm in sensory and cognitive neuroscience: closed-loop sensory stimulation. Traditional open-loop experimental paradigms involving head-fixation of the animal, useful for performing sensitive measurements of functional brain activity, are being replaced by experimental setups that partially close the loop between action and sensation while still retaining precise control of sensory inputs [9][10][11][12][13] .
Virtual reality (VR) systems close the loop between locomotion and vision. Many rodent laboratories use head- or body-restrained VR (rVR) setups to simulate locomotion through a 3D virtual environment (VE) via running on a treadmill 10,11 . Spatial coding research has especially benefited from such systems; VR researchers have taken advantage of the flexibility of a VE by implementing arbitrarily-large environmental exploration paradigms utilizing dynamic environments [14][15][16] and manipulating visuomotor gain 17 . Additionally, many researchers take advantage of the rodent's fixed head by performing optical and intracellular recordings during locomotion through virtual space, a normally-challenging task in freely-moving animals 10,18,19 .
However, locomotion on a treadmill alone may not suffice for closed-loop research; behavioral and physiological differences between rVR and real-world navigation illustrate the detrimental effect of sensorimotor loop disruption and the importance of increasing motor affordances. While head-fixed rodents in rVR experiments are limited to navigating linear tracks 10,[17][18][19][20] , likely due to an impoverished sensory-motor loop (Schmidt-Hieber, personal communication), rodents can navigate a two-dimensional VE if only their bodies are restrained and their heads left free to move 14,16,[21][22][23] . If rats are further allowed to rotate while running on a spherical treadmill in rVR experiments, the 2D hippocampal place cell representation of the VE is comparable to that in real-world navigation 23 ; however, this effect is lost if the rodent's body rotation range is limited 21,24 .
Despite the great utility of rVR for studies of spatial navigation, animal restraint still poses unresolved challenges. First, restrained animals exhibit constrained or limited behavioral patterns within 2D space, which affects the way they actively sample the 3D environment. Second, locomotion-driven visual input is in conflict with locomotion-independent, head-bound idiothetic, olfactory, tactile and auditory inputs. Third, proprioceptive and vestibular inputs in rVR setups are diminished and unnatural, making them potential causes of the observed reduction in frequency- and speed-correlates of theta oscillatory dynamics, compared to rodents allowed to freely navigate the real world 23,24 . Lastly, animals require long and complex training and habituation to rVR setups 23,25 .
These challenges are resolved if visual feedback in VR is based on head motion in 3D space in freely-moving subjects, giving rise to coherent visual, idiothetic and external multisensory input, an unperturbed action-perception loop, and a full repertoire of rodent behavior, while still preserving the precise control of visual stimuli of VR setups 26 . One such freely-moving VR (fmVR) system was introduced for human subjects as the Cave Automatic Virtual Environment (CAVE) 27 . A CAVE allows observers to freely move in space and view a 3D VE on the projection surfaces surrounding them. To date, CAVE-like VR systems for flies 28,29 and fish 30 couple animal 3D motion to 2D contrast patterns projected onto cylindrical surfaces, though a system for arthropods with more realistic visual feedback has been reported 31 . Implementation of a CAVE system in rodents, a model mammalian system in which complex interrogation and manipulation of the nervous system can be combined with cognitive behavior, would open new dimensions in experimental neuroscience. Designing an immersive fmVR for quickly-moving animals is challenging, however, as it requires very-low-latency visual feedback to avoid introducing new conflicts in the sensorimotor loop 32 and computationally-intensive graphical operations to produce a visually-rich VE. The urgent need for, and benefits of, a next-generation immersive fmVR were highlighted in a recent review 11 .
To provide an immersive virtual environment for untrained freely-moving rodents and allow them to explore and interact with the virtual environment in a natural manner, we developed a new CAVE fmVR system (ratCAVE) that produces minimal intersensory conflict during self-motion using fast head-tracking and high display frame rates, as well as enriched visual 3D cues of the virtual scene. We demonstrate the naturalistic interaction of rats with VEs in our fmVR system in several behavioral tasks. We further show a use case of fmVR not possible with rVR systems: to study the multisensory nature of hippocampal spatial representation. This highly-immersive fmVR system can be a powerful tool for a broad range of neuroscience disciplines.

ratCAVE: VR system for freely-moving rodents
We implemented a CAVE system in which the VE projection on the surface of the arena was coupled in closed loop with real-time tracking of the animal's head. In this setup, animals could move freely in a rectangular arena similar to those used for conventional open-field experiments, but the white-painted arena served as a projection surface. We used an array of 12 high-speed cameras (240-360 fps, NaturalPoint Inc.) to track the 3D position of the rodent's head via a rigid array of retro-reflective spheres attached to a head-mounted 3D-printed skeleton (Fig. 1c,d). This tracking system enabled us to update the rodent's head position with very high spatial (<0.1 mm) and temporal (<2.7 msec) resolution. The VE, created using open-source 3D modeling software (Blender 3D), was rendered each frame in a full 360-degree arc about the rodent's head and mapped onto a 3D computer model of the arena using custom Python and OpenGL packages (Supplementary Fig. 3, Online Methods), warped in real-time to generate a fully-interactive, geometrically-accurate 3D scene (Fig. 1b). The core cube-mapping algorithm used to map the VE onto the projection surface was identical to those described in rodent rVR setups (Supplementary Fig. 2a-c) 23 , but the VE projection onto the surface of the arena is continuously updated according to the changing 3D position of the rodent's head (Fig. 1b), resulting in the perception of a 3D VE that is stable in the real-world frame of reference in which the animal freely moves (Fig. 1c,d). The resulting image was front-projected onto the floor and slanted walls of the arena from a ceiling-mounted high-speed (240 fps) video projector (Supplementary Fig. 4). Because the presented virtual motion parallax cue automatically takes into account the rodent's distance from the arena's walls, virtual objects can be made to appear both inside and outside the arena's boundaries (Supplementary Movie 1).
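The head-centered cube-mapping step can be illustrated with a minimal sketch: for each point on the arena surface, the cube-map face (rendered from the head position) and the coordinates within that face are selected by the dominant axis of the head-to-surface ray. This is the generic cube-mapping rule, not the actual ratCAVE shader implementation.

```python
import numpy as np

def cubemap_face_and_uv(head, surface_point):
    """For a point on the arena surface, compute which face of a cube map
    rendered from the head position to sample, and its (u, v) coordinates.
    The sample direction is the ray from the head to the surface point;
    as the head moves, these directions (and thus the projected image)
    change, keeping the perceived VE stable in the real-world frame."""
    d = np.asarray(surface_point, float) - np.asarray(head, float)
    ax = np.abs(d)
    i = int(np.argmax(ax))  # dominant axis selects the cube face
    face = ("+x" if d[0] > 0 else "-x",
            "+y" if d[1] > 0 else "-y",
            "+z" if d[2] > 0 else "-z")[i]
    # the two remaining components, normalized by the dominant one, give (u, v)
    uv = np.delete(d, i) / ax[i]
    return face, uv
```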

Flexible design, calibration and mobility of the VR arena
Automatic arena-projector calibration ensured that the image was correctly projected onto the arena's surface. Calibration was realized via a point-cloud modeling procedure: projecting a random dot pattern onto the arena's surface, measuring the 3D position of each dot via the 3D tracking system, and fitting a 3D digital model of the arena to this point cloud (Fig. 1a). This scanning process provides the flexibility to layer a VE over an arbitrary arena surface, including smooth objects inside the arena. The position of the arena with respect to the projector was continuously tracked using a set of retro-reflective spheres mounted on the arena itself, allowing the arena to be arbitrarily translated and rotated during an experimental session while preserving the correct projection.
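The surface-fitting step of this calibration can be illustrated with a minimal sketch of least-squares plane fitting: each projected dot measured by the tracking system contributes one 3D point, and fitting planes to clusters of points recovers the floor and wall geometry of the arena model. The SVD-based fit below is a standard technique, shown only to illustrate the point-cloud modeling idea, not the exact ratCAVE calibration code.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to a 3D point cloud via SVD.
    Returns (centroid, unit normal). The right singular vector with
    the smallest singular value of the centered points is the plane
    normal (the direction of least variance)."""
    pts = np.asarray(points, float)
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    n = vt[-1]
    return c, n / np.linalg.norm(n)
```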
Low-latency motor-visual feedback of the ratCAVE system

Motion-to-photon (end-to-end) latency in our system cumulatively included the input lag of the tracking system, the processing lag of the tracking and ratCAVE software, as well as "display lag", the time it takes for the rendered image to be projected. Selecting fast tracking and display hardware and optimizing the software allowed us to achieve a motion-to-photon latency approaching 15 msec (Supplementary Fig. 1a-c). This latency is significantly lower than that of any fmVR/CAVE system reported to date that we are aware of, and additionally supplies a smoother motion stimulus than systems with lower-framerate displays (typically 60 Hz) 31,33 . Since rats rarely reached speeds of 50 cm/s during spontaneous exploration of the arena (Supplementary Fig. 1d), we expect that they experienced minimal, if any, latency-related cross-sensory conflicts in our system.

Visual cues enhancing VR immersion
CAVE systems can present a large number of conflicting visual cues that distract from VR immersion, and we took additional steps to decrease them. First, we implemented online radiosity compensation, which equalizes the image brightness across the entire arena to decrease the visual perception of the arena itself. Second, we implemented antialiasing to decrease the perception of individual pixels. Third, the location of the virtual light source was programmed to match the position of the projector, giving the impression that the projector simply illuminates the virtual objects, rather than creating them. Finally, to provide a richer visual scene and additional visual depth cues to the observer 34 , we implemented both diffuse and "glossy" specular reflections off the virtual objects' surfaces using the Phong reflection model, as well as shadows that objects cast on themselves and on other objects. The addition of these visual features gave rise to a smooth and perceptually realistic VE (Supplementary Fig. 2d).
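The Phong reflection model mentioned above combines ambient, diffuse, and view-dependent specular terms; because the specular term depends on the viewer position (here, the rat's tracked head), highlights move with the head and act as a motion-dependent depth cue. A minimal scalar-intensity sketch follows; the reflection coefficients are illustrative placeholders, not the shader parameters used in ratCAVE.

```python
import numpy as np

def phong(point, normal, light_pos, eye, kd=0.7, ks=0.3, shininess=32, ambient=0.1):
    """Phong reflection model: intensity = ambient + diffuse + specular.
    The specular term uses the mirror reflection r of the light direction
    about the surface normal, compared against the view direction."""
    point, normal, light_pos, eye = (np.asarray(v, float)
                                     for v in (point, normal, light_pos, eye))
    n = normal / np.linalg.norm(normal)
    l = light_pos - point
    l = l / np.linalg.norm(l)          # unit direction to the light
    v = eye - point
    v = v / np.linalg.norm(v)          # unit direction to the viewer
    r = 2 * n.dot(l) * n - l           # mirror reflection of the light direction
    diffuse = kd * max(n.dot(l), 0.0)
    specular = ks * max(r.dot(v), 0.0) ** shininess
    return ambient + diffuse + specular
```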

Testing spontaneous behavior of rats in the ratCAVE
We designed a set of behavioral experiments aimed at exploring and evaluating the degree of rats' immersion in, and interaction with, the VE provided by ratCAVE. In each experiment, the behavior of freely-moving rats (n=3) was tested in distinct VEs designed to evaluate specific aspects of behavioral interaction with purely virtual elements: virtual cliff avoidance, virtual object exploration, and interaction with a virtual wall. These tasks were specifically chosen to require no pre-training or reinforcement and to rely on the spontaneous behavior of rodents. Benefiting from high-spatial-resolution tracking of the position and orientation of the rats' heads, each rat's natural behavior during each task was classified into walking, immobility and rearing based on speed and head-height features (Supplementary Fig. 6a). The three experiments were performed repeatedly across animals over several days.
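The speed- and head-height-based behavior classification can be sketched as a simple thresholding rule on each tracking sample. The threshold values below are illustrative placeholders, not the values used in the study.

```python
def classify_behavior(speed_cm_s, head_height_cm,
                      speed_thresh=2.0, rear_thresh=15.0):
    """Classify one tracking sample into the three behavior classes
    described in the text, using speed and head-height features.
    Rearing takes precedence: a raised head indicates rearing
    regardless of speed; otherwise speed separates walking from
    immobility. Thresholds are illustrative, not from the paper."""
    if head_height_cm > rear_thresh:
        return "rearing"
    return "walking" if speed_cm_s > speed_thresh else "immobility"
```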

Virtual cliff avoidance experiment
The visual cliff avoidance paradigm is a classical test of visual depth perception and relies on the innate behavior of animals 35 . We designed a virtual version of this task that tests whether rats avoid jumping from a virtual cliff emulated in the VE. In each 30-second session, rats were placed onto a board suspended above the arena's floor, bisecting the arena into randomly-assigned safe and cliff sides, in which the virtual floor was at floor level and 1.5 meters below it, respectively (Fig. 2a; Supplementary Movie 3). We observed several well-defined behaviors in this task: wall-supported rearing, visual exploration of the ledges (head dipping), and the jump off the ledge towards one of the virtual floors (Fig. 2b, Supplementary Fig. 5a-c). Interestingly, rats preferred to jump to the safe side if they made their decision after short exploration (<20 sec), but this preference decreased to chance level if longer exploration times were included (Fig. 2c, Supplementary Fig. 5d). When excluding outlier sessions (see Online Methods), we found that rats showed a preference toward the safe side regardless of the position of the virtual cliff (Fig. 2d). Thus, when exposed to the VR for a limited time, rats tend to avoid the virtual cliff, similar to real cliff-avoidance paradigms.

Interaction with virtual walls
Virtual boundaries are the main elements of the VE that inform animals about the topology of the virtual space 36 . In rVR systems, rats are traditionally operantly conditioned to respect the boundaries by freezing the VE upon collision of the animal's virtual trajectory with a wall 16,21,37 . To investigate how naive rats spontaneously interact with virtual boundaries, we introduced a virtual wall in the middle of the arena (Fig. 3a,b). During 10-minute sessions, rats were allowed to explore the environment. Rats displayed a noticeable change in their behavior in the vicinity of the wall, as demonstrated by increased occupancy and rearing events around it (Supplementary Fig. 6b; Supplementary Movie 4). Interestingly, the orientations of locomotion trajectories in the vicinity of the virtual wall concentrated around orientations perpendicular and parallel to the wall (Supplementary Fig. 7b-c), indicating that the rats moved either along or towards/away from the virtual wall. The clustering of trajectory orientations parallel to the virtual wall was similar to that near the real wall, but was not present in the matching location in control sessions with an empty arena (Supplementary Fig. 7). This behavior is consistent with thigmotaxis along both virtual and real walls. We further tested whether rats treated the virtual wall as an obstacle when approaching it. Locomotion trajectories approaching the virtual wall were more likely to turn away from it (a "deflection" trajectory) than to cross through it to the other side of the arena (a "crossing" trajectory, Fig. 3c), compared to the same arena locations in control sessions with an empty arena, but not in the direction parallel to the virtual wall under either condition (χ2 = 48.48, n = 797 trajectories, p < .001, Fig. 3c-d). Thus, rats' interactive behavior towards the virtual wall is consistent with them responding to it as a wall.
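Comparing deflection vs. crossing counts between virtual-wall and empty-arena control sessions is a standard 2x2 contingency test; a minimal sketch of the Pearson chi-square statistic is below (the counts in the usage test are made up for illustration, not the study's data).

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], e.g. rows = wall vs. control session,
    columns = deflection vs. crossing trajectory counts.
    Uses the closed-form shortcut n*(ad - bc)^2 / (row and column
    marginal products), equivalent to the observed-vs-expected sum."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den
```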

Exploration of virtual objects
Spontaneous exploration of objects is the cornerstone of a multitude of behavioral paradigms aimed at studying perception and memory 38 . Real objects have multimodal features and affordances, but require careful and laborious handling for repeated presentation and feature manipulation. 3D virtual objects can be arbitrarily designed, manipulated and presented to an animal automatically. While rodents can perceive 3D shapes 39 and navigate towards reward locations marked by virtual objects in rVR 22,23 , naturalistic exploration of virtual objects cannot be properly tested with any existing methods. In a series of test sessions, we investigated how rats spontaneously interact with virtual 3D objects (Supplementary Fig. 8a) pseudorandomly positioned inside the arena (Fig. 4a; Supplementary Movie 5). Rats spent more time in the vicinity (<15 cm) of the virtual objects, especially in the center of the arena, with their trajectories precisely approaching the objects, as compared to sham locations (Fig. 4a-b). We further quantified how rats interacted with the virtual objects on their direct approach trajectories (<10 cm from the virtual object). Similar to the interactions with the virtual walls, rats' trajectories often "deflected" from the virtual objects, reflecting that rats changed their direction of running (<90-deg arc) after reaching the virtual object's boundaries (Fig. 4c-d, Supplementary Fig. 8c). The deflective nature of interaction with virtual objects was qualitatively reminiscent of that with real objects (Fig. 4c), and while less frequent, deflections occurred significantly more often around objects than in sham locations (Fig. 4d). Rats occasionally displayed rearing and head-scanning behavior in the vicinity of the virtual objects (data not shown).
Interestingly, in a fraction of sessions in which exploration of an empty arena followed the object trial, rats showed a tendency to spend more time in the locations where they had previously encountered the virtual objects (Supplementary Fig. 8e).

Effect of virtual environment on hippocampal spatial map.
As we have shown above, animals immersed in the VE interact with it less reliably than with a real environment, so behavioral readout only partially reflects the animal's perception of the VE; the internal hippocampal representation of virtual space can provide additional insight into that perception 23 . Hippocampal spatial representation is believed to be anchored to multiple frames of reference, concurrently controlled by visual geometrical features of boundaries and landmarks, other external sensory inputs, and idiothetic inputs. Due to the physical limitations of a real environment, however, dissociating the contributions of these different reference frames is difficult, and has so far been mainly limited to rotations around a symmetry axis 40 . Here we illustrate an application of ratCAVE to study the complete dissociation of visual input from all other multisensory inputs in hippocampal spatial representation, by linearly translating the visual boundaries with respect to the physical environment. In a pilot experiment, we recorded a population of pyramidal cells in the CA1/2 regions of the hippocampus (166 and 154 cells from the two days analyzed) in a rat spontaneously exploring the arena through a series of sessions in which the VE was either aligned with, or laterally shifted by 20 cm with respect to, the physical boundaries of the arena (Normal vs. Shifted, Fig. 5a). Similar to the virtual wall interaction experiment, the rat interacted with the virtual boundary that appeared inside the arena in the Shifted condition, at least during the first Shift session. Interestingly, the population of place cells (n=20, see Online Methods for selection criteria) remapped their place fields within the arena between Normal and Shifted sessions in the direction of the VE shift (Fig. 5b-c). The effect decreased over consecutive alternating sessions, and following multiple exposures to the shifted VE (3 days later), place cells showed no remapping between Shifted and Normal conditions (Fig. 5d-e).
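The place-field shift between conditions can be quantified, for example, as the displacement of the firing-rate-weighted centroid of each cell's rate map between the two sessions. The sketch below illustrates this idea under the assumption of identically binned rate maps; it is not the exact analysis used in the study.

```python
import numpy as np

def field_shift(rate_map_a, rate_map_b):
    """Estimate the lateral shift of a place field between two conditions
    as the difference of firing-rate-weighted centroids of two rate maps
    with identical binning. Returns (dx, dy) in bins; a population-wide
    shift along the axis of the VE translation would indicate visual
    control of the spatial map."""
    def centroid(m):
        m = np.asarray(m, float)
        ys, xs = np.indices(m.shape)
        w = m.sum()
        return np.array([(xs * m).sum() / w, (ys * m).sum() / w])
    return centroid(rate_map_b) - centroid(rate_map_a)
```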
We tested whether any visual information associated with the VE boundaries contributed to the stabilized spatial map by immersing the rat in a VE that was unrelated to, and extended beyond, the physical boundaries of the arena. This VE likewise had no effect on place field positions (Fig. 5d-e, bottom). Thus, ratCAVE is sufficiently immersive to enable visual control of hippocampal spatial representation, but progressive exposure to the conflict between visual and other multisensory inputs enabled by ratCAVE can result in complete independence of the hippocampal spatial representation from the visual input 41,42 .

DISCUSSION
We presented the ratCAVE system for freely-moving rodents, which builds on and extends previous developments of fmVR systems in arthropods and fish 28-31 to provide a high-performance, general-purpose VR research platform for cognitive science by implementing a combination of methods that provide realistic visual environments, low-latency and high-precision closed-loop feedback to the animal's head, and flexibility in the shape and mobility of the arena. Using more complex lighting models, including diffuse and specular reflections and self-shadowing, provides new visuo-spatial cues for virtual environments and increases immersion 34,43 . In humans, sensory conflicts resulting from out-of-phase feedback to rapid head motion arise when the motion-to-photon latency of a VR system exceeds ca. 50 msec, resulting in decreased performance in spatial navigation, spatial perception, and the sense of self-motion in the VE 32 . To counter this effect, we implemented a low-latency visual update loop (240 fps, 15 msec motion-to-photon lag) to decrease the mismatch between vestibular, proprioceptive, and visual self-motion cues, which is essential for proper self-motion detection and functioning of the head-direction system 44,45 .
There are pressing improvements needed to further increase immersion in VR systems used in neuroscience research. While rVR immersion requires animals to ignore lacking or mismatching sensory inputs, immersion in fmVR is associated with minimal conflict between vision and the other senses. However, both rVR and fmVR systems suffer from cross-sensory conflict upon collision of the animal's trajectory with a virtual boundary, which can break immersion. In rVR setups, the solution has been to simply stop the visual update while still allowing rodent locomotion, creating a locomotion-visual mismatch upon impact 16,23 . In fmVR, a similar mismatch occurs when virtual and real surfaces are not matched and are directly sampled by the animal. Such situations require a careful selection of virtual environment, arena design and method to match the research questions at hand. A few improvements can be considered in ratCAVE. First, VE objects and boundaries can be made inaccessible to the animal by projecting them outside the arena walls or across a gap. Second, the ratCAVE calibration procedure allows projecting virtual objects onto smooth shapes inside the arena, thus aligning them with real counterparts and enhancing VR immersion via all three avenues: naturalistic interaction (via touch and smell), increased cue salience, and reduced cross-sensory mismatch upon virtual object contact. Third, electrical or optogenetic stimulation of the olfactory or somatosensory system 46,47 can be used to provide congruent multisensory feedback. Similarly, the use of visuo-acoustic VR can provide a more cohesive VE 22,48 . In addition to motion-dependent monocular depth cues, static binocular depth cues based on stereoscopy are also important for forming an accurate 3D space percept 43 , a point currently ignored in rodent VR studies.
Thanks to precise head-based projection, the ratCAVE system can be extended to generate a stereo VE via head-mounted shutter glasses providing alternating images to the left and right eyes of the exploring rodent. Many of these improvements can be added to existing rVR and fmVR systems to increase VR immersion in those setups. Further integration and cross-pollination of open-source fmVR and rVR developments in diverse animal models will enable a broad spectrum of neuroscientists to use these systems.
Freely-moving virtual reality improves on rVR in VE immersion through three avenues: enhanced naturalistic interaction with the virtual environment, increased salience of sensory cues associated with the virtual environment, and minimized cross-sensory conflict. Naturalistic interaction with the virtual environment is enhanced in fmVR by simply allowing the full range of movement in an unmodified space, without training or postural alteration, while in rVR, locomotion and virtual object interaction must be simulated via running on a spherical treadmill. Self-motion cues through the virtual environment are enhanced in fmVR by providing higher-frequency and shorter-latency feedback to head motions in the virtual environment alongside the lower-frequency locomotion behaviors, while rVR provides only locomotion feedback. In contrast to rVR, which assumes a stationary head in the virtual projection, fmVR minimizes cross-sensory conflict by providing feedback to head motions, as well as by matching changes in olfactory, tactile, and auditory real-world inputs to self-motion in the virtual world. Finally, fmVR systems do not require the operant training and habituation procedures used in rVR systems.
We demonstrated that the ratCAVE VR system for freely-moving animals can be successfully applied to a number of behavioral paradigms not possible with conventional rVR systems. Untrained rats freely behaved and spontaneously interacted with the virtual environment by approaching, exploring and leaving virtual objects and walls, displaying thigmotaxis along virtual walls and avoiding a virtual cliff. We further used the ratCAVE system to illustrate how the contribution of virtual visual input to hippocampal spatial representation can be strong upon first exposure to a VE mismatched with the physical world, but becomes negligible after repeated exposure to the cross-sensory conflict. These experiments and the design features of ratCAVE described above pave the way for a large body of future applications.
First, high-spatio-temporal-resolution 3D tracking of the rodent's head, which can be extended to include the full body, enables quantitative analysis of the natural behaviors of the rodent during VE exploration, significantly extending the level of analysis possible with the two-dimensional locomotion information provided by conventional tracking in 2D space or treadmill measurement in rVR. Second, ratCAVE's trackable arena also enables vestibular perturbations during VR experiments via arena movement, enabling studies of vestibular system function and visuo-vestibular binding in behaving rodents. Third, fmVR's ability to incorporate a three-dimensional element into operant conditioning tasks increases the range of motor affordances of digitally-rendered learning stimuli, which have their own benefits of flexibility and timing control 49 . Integrating these improvements into VR setups will enable new methods in research areas such as learning and memory, perceptual decision-making, and 3D-rotation and object perception 39 . Fourth, the automated nature of head tracking, which allows for online behavior analysis, operant conditioning, and fmVR, enables high-throughput and automatic behavioral testing in a colony of animals 50 across a large variety of tasks, such as perceptual, incidental and motor learning, and spatial memory paradigms, to name a few. Importantly, the use of automated fmVR behavioral paradigms allows their standardization and the reproducibility of results independent of experimenter or setup. Finally, combined with neural recording and manipulation, ratCAVE enables detailed investigation of the mechanisms of spatial coding. Manipulation of the arena boundaries provides a powerful tool to study the multisensory nature, remapping and attractor properties of the spatial representation 51 .
Low latency, unmatched by any other system for freely-moving subjects, and rich visual features make ratCAVE appealing for use with human subjects. Translation of experimental paradigms and physiological validation of psychophysical experiments from humans to animals and back could enable validation and further development of diagnostic and rehabilitation procedures for vestibular or neurodegenerative disorders in animal models 52,53 . ratCAVE opens new ways to study sensory-motor systems in their natural dynamics while providing a flexibility in manipulating sensory feedback not possible in real life.

COMPETING INTERESTS STATEMENT
The authors declare that they have no competing financial interests.

ratCAVE VR system
Hardware setup. Our setup consisted of a rectangular arena with dimensions 115 cm x 65 cm (L, W) and walls 40 cm high, angled at 70 degrees to increase the projected image's surface area and brightness. A set of 12 cameras (OptiTrack, NaturalPoint Inc., U.S.), six Prime 17W (360 fps) and six Prime 13W (240 fps), was used to record the 3D position of the retro-reflective spheres. A projector with a 240 fps frame rate (VPixx Technologies Inc., Saint-Bruno, Canada) was mounted to the ceiling. An optically-flat aluminum-foil projection mirror (100 cm x 75 cm, Screen-Tech), slanted 45 degrees, was suspended from the ceiling on an adjustable frame for accurately fitting the projected image onto the whole surface of the arena. This setup was installed inside an isolating acoustic chamber (Supplementary Fig. 4).
Software. The ratCAVE VR system depends on many pieces of software working together; the interactions between the software components are diagrammed in Supplementary Figure 3. Virtual environments are modeled and exported to file in a 3D modeling program, Blender 3D (Supplementary Fig. 3a). Coregistration of the arena and projector with the tracking coordinate system is performed via a custom Python command-line program package called "ratcave_calibration", which uses a custom Python API called "MotivePy" to access and control our OptiTrack camera array, while using a custom Python 3D graphics utility package called "Fruitloop" to render the point cloud from the projector (Supplementary Fig. 3, "Grey Zone"). Fruitloop provides a user-friendly interface for modern OpenGL rendering techniques, and its "Get Data, Update Camera, Render VE" event loop forms the core engine of a ratCAVE virtual reality session. Cube-mapping, lighting, and antialiasing are done via OpenGL FrameBuffer objects and shader scripts supplied with Fruitloop. VR experiment scripts are written in Python, using a custom network client called "NatNetClient" to obtain OptiTrack camera data in real-time and Fruitloop to render the virtual scene (Supplementary Fig. 3, "Blue Zone"). Because all software used in the ratCAVE VR setup is comprised of loosely-coupled specialized parts, the software developed by the lab is generalizable to a variety of different setups, enabling other labs to substitute like components to build a VR setup that matches their hardware. Head tracking was performed by the motion-capture software (Supplementary Fig. 3, "3D Tracking Software"). Rodent head position was then logged for offline analysis and sent over the network to the VR system's experiment script via a custom Python package (NatNetClient) for visual stimulus update (Supplementary Fig. 3, "Python Optitrack Client").
The ratCAVE VR engine Fruitloop receives the current position of the rat's head from NatNetClient, updates the virtual scene from the rat's perspective, generates the projected image using a cube-mapping algorithm (Supplementary Fig. 2a-c), performs per-fragment lighting calculations (Supplementary Fig. 2d), and antialiases the resultant video output via custom OpenGL shaders (Supplementary Fig. 3). The resultant image is then projected onto the arena via the video projector.
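The "Get Data, Update Camera, Render VE" event loop described above can be sketched schematically as follows. The class and method names here are illustrative placeholders, not the actual Fruitloop or NatNetClient APIs.

```python
# Schematic sketch of the per-frame VR event loop: obtain the tracked
# head position, move the virtual viewpoint to it, and render the VE.
# `tracker` and `scene` stand in for the tracking client and 3D engine.
class VRLoop:
    def __init__(self, tracker, scene):
        self.tracker = tracker   # supplies head positions each frame
        self.scene = scene       # renders the VE from a given viewpoint
        self.frames = 0

    def step(self):
        head = self.tracker.get_head_position()  # Get Data
        self.scene.set_viewpoint(head)           # Update Camera
        self.scene.render()                      # Render VE
        self.frames += 1
        return head
```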
Latency measurement. Motion-to-photon latency was measured explicitly using the following setup 54 . A reference point representing a VR observer, formed by a set of three retro-reflecting markers and a small LED, was attached to a bar that was rotated by an AC motor in the horizontal plane around a fixed point inside the arena and was tracked as described above. The VR system was programmed to generate a white spot, offset in the horizontal plane from the reference point, that followed the reference marker; the VR spot thus rotated in the horizontal plane following the rotation of the reference LED. Both the LED and the VR spot were imaged with a high-speed camera (Prime, Photometrics) at 250 Hz. The image stack was processed to detect both spots (Supplementary Fig. 1a), and the temporal trajectories of the X and Y coordinates of the reference and VR spots were analyzed to detect the temporal offset between them using the cross-correlation function (Supplementary Fig. 1b). The angular speed of rotation was varied between trials, and the resulting tangential linear speed was computed and used for the latency-speed analysis (Supplementary Fig. 1c).
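The lag-detection step can be sketched as follows; this is a minimal NumPy illustration of estimating the temporal offset between two spot trajectories via cross-correlation, not the study's analysis code (the function name is hypothetical):

```python
import numpy as np

def estimate_latency(ref, vr, fs):
    """Estimate latency (s) as the lag maximizing the cross-correlation
    between the reference-spot and VR-spot coordinate trajectories,
    sampled at fs Hz. A positive result means vr trails ref.
    Illustrative sketch, not the published analysis code."""
    ref = ref - np.mean(ref)          # remove DC offset before correlating
    vr = vr - np.mean(vr)
    xcorr = np.correlate(vr, ref, mode="full")
    lags = np.arange(-len(ref) + 1, len(vr))  # sample lags of 'full' output
    return lags[np.argmax(xcorr)] / fs
```

For a spot delayed by 4 samples at 250 Hz, this recovers a latency of 16 ms; in the actual measurement the lag was estimated per trial and related to the tangential speed of the rotating marker.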

Animal experiments methods
All procedures complied with the European Communities Council Directive 2010/63/EC and the German Law for Protection of Animals and were approved by the local authorities, following appropriate ethics review.
Subjects. Three 6-month-old male Long-Evans rats (Charles-River, Germany) were used for the analysis of spontaneous exploratory behavior in virtual environments, and three rats were used for analysis of spontaneous exploration of real-world objects.
An additional rat was used to record hippocampal neural activity in a virtual environment, as described in the "VE Shift Experiment" section. All rats were allowed ad libitum access to water and food. All rats were extensively handled by the experimenter prior to behavioral experiments in order to minimize stress.

Behavioral experiments
We recorded the spontaneous behaviors of three rats in three virtual environments.
Each session, conducted twice per day over one week, consisted of two phases: a one-minute visual cliff session and a ten-minute arena exploration session, between which the rat was removed from the arena. Same-day sessions were separated by a minimum of 5 hours, the first in the middle of the rat's light cycle and the second at the beginning of its dark cycle (labeled in Supp. Fig.).
Virtual wall experiment. During the virtual wall sessions, rats were allowed to freely explore the arena for 10 minutes. A virtual wall extended from the center of the arena, dividing it across its length (short wall) or its width (long wall). Each rat was exposed to both walls for five minutes each (long followed by short wall) in a single session.
Virtual object exploration. During object exploration, rats were allowed to freely explore three different virtual objects, each roughly 6 cm in diameter and randomly selected from a pool of 11 custom-designed 3D models (Supplementary Fig. 8a). The objects were placed at the nodes of a triangular configuration ("corner", "wall", and "center"; Fig. 4a), which was pseudo-randomly rotated between trials. In some of these sessions, the objects displayed a shrinking, rotating, jumping, or running animation when the rat came within 15 cm of the object's center, with the goal of increasing rodent engagement with the objects; this factor was ignored in the analyses due to the low sample size.
Surgery. […] (0.15 mg/kg); this compound also provided analgesia for the first part of the procedure. A 1.5% concentration of isoflurane in oxygen was used to maintain the depth of anesthesia for the rest of the surgery. In animals used for behavioral assays, a small screw was fixed into the skull to provide support for the head post. In one rat, a silicon probe (NeuroNexus, Buzsaki 32 design, 4 shanks, 8 sites spaced ~25 um vertically) was implanted following procedures described elsewhere 55 . Briefly, a cranial window of ~2 mm 2 was opened, centered on the following coordinates from bregma: […].
Statistical analysis. […] with parametric tests were not tested. The size of the experimental animal sample required to ensure adequate power could not be determined prior to the study, since no analysis parameters could be predicted a priori. Instead of increasing the number of animals, we repeated individual experimental sessions (virtual cliff and object exploration; the virtual wall condition was not repeated due to recognized interference between VR sessions from a potential memory effect). No animals or sessions were excluded from the analysis. Randomization and blinding were not performed, as all animals were subjected to all test conditions as well as control sessions.
Behavioral state classification. The behavioral state of the rat was classified based on the speed and height of the head. Using data-derived thresholds for these variables, we defined running (speed > 3 cm/s and height < 13.4 cm), immobility (speed <= 3 cm/s and height < 13.4 cm), and rearing (speed <= 3 cm/s and height > 15.4 cm).
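These threshold rules amount to a simple decision function; a minimal sketch (the function name is illustrative, and the handling of the unclassified 13.4-15.4 cm height band is an assumption, since the text does not specify it):

```python
def classify_state(speed, height):
    """Classify behavioral state from head speed (cm/s) and height (cm)
    using the data-derived thresholds above. Samples outside the three
    defined regions (e.g. heights between 13.4 and 15.4 cm) are left
    unclassified (None) -- an assumption, not stated in the text."""
    if height < 13.4:
        return "running" if speed > 3.0 else "immobility"
    if height > 15.4 and speed <= 3.0:
        return "rearing"
    return None
```

Applied sample-by-sample to the head-tracking stream, this yields a per-frame state label.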
Virtual cliff avoidance analysis. Rat behavior was segmented, based on head-tracking data, into supported rearing on the arena walls and general exploratory behavior. In addition, visual exploration was associated with head dips, which were detected as trajectories of the head extending to within 1 cm of the board. Jumps were detected as trajectories that depart from the board and land on the floor; a rat's landing after jumping down from the board was detected based on the height of its head (threshold < 7 cm). Example session time courses are shown in Supplementary Fig. 5a. Since rats spent a variable amount of time across sessions in supported rearing (M = 24.8% of trial, SD = 15.5%), likely trying to escape the arena or look outside it, we removed these periods from the decision-time estimation, yielding a measure of exploration time before the jump event, which we used to analyze the effect of time spent exploring the VE prior to the jump-side decision (Fig. 2b, Supplementary Fig. 5d). Excluding supported rearing did not qualitatively change the outcome of the statistical analysis.
Virtual object exploration analysis. Object exploration was quantified using a set of metrics aimed at measuring rats' exploration of the virtual objects' locations. We used progressively more refined measures. First, the occupancy of the object vicinity, i.e., the probability that the rat is located within 15 cm of the object, was used as a crude measure of the animal's general preference for being near the virtual objects. Second, the occupancy density at the object location, computed as the ratio of occupancy within 5 cm to that within 15 cm of the object's center, was used to measure the selective localization of increased occupancy in the direct vicinity of the object.
Third, we analyzed the proportion of trajectories entering the vicinity of the object (10 cm radius) that reached within 3 cm of the virtual object's center. To control for the significance of this effect against random locomotor activity, which is naturally constrained and interacts with the arena walls, we first considered using control sessions that contained no objects in the arena. Surprisingly, we found increased occupancy at the virtual object locations compared to the rest of the arena in these sessions (Supplementary Fig. 8e), potentially reflecting a memory effect for the objects' locations. To avoid these inter-session interactions, all further trajectory analyses were done against a within-session control "sham" location, paired to each virtual object on the opposite side of the arena (Fig. 4a; Supplementary Fig. 8b, top). For all measures of object exploration we constructed a discrimination index [DI = (VR - Sham) / (VR + Sham)] and tested for significant differences from zero using a Wilcoxon signed-rank test. Consistent with observations both in other studies utilizing real-object discrimination tasks and in our own analysis (Supplementary Fig. 7), animal behavior near the arena and virtual wall boundaries was heavily biased toward thigmotaxis. In addition, we observed a high rate of supported rearing next to the walls and, especially, in the corners (Supplementary Fig. 6b). These factors contaminated, and rendered insensitive, most measures of spontaneous exploration of the objects located next to the wall and in the corner. Consistently, we found that occupancy times for object versus sham were significantly higher for the center object (Z = 2.70, p < .01, Fig. 4b), but not for the wall object (Z = 1.42, p = .08, data not shown) or the corner object (Z = -0.52, p = .70, data not shown). Occupancy density differed significantly from sham for the center object (Z = 3.55, p < .001, Fig. 4b) and the corner object (Z = 2.13, p < .05, data not shown), but not for the wall object (Z = 0.56, p = .29, data not shown). Locomotion trajectories approaching the object (within 10 cm) were also more likely to pass through the VR objects than through their sham pairs […].
The rats sometimes interacted with the virtual objects and then changed their running direction. To quantify this behavior, we introduced the notion of trajectory "deflections" from the object (see Fig. 4c for example trajectories). We analyzed the relationship between the arc angle made by trajectories entering and leaving the 10 cm circle around the object, the "deflection angle", and the shortest distance between the trajectory and the object. If a trajectory approached the object closely and its deflection angle was acute (< 90 degrees), we qualified it as "deflecting", while obtuse (> 90 degrees) deflection angles were qualified as "crossing". As trajectories not reaching the proximity of the object are progressively associated with smaller deflection angles, we set a conservative cut-off distance of 3 cm to define a trajectory as deflecting.
Thus, deflecting trajectories are those that fall in the region of less than 3 cm and less than 1.56 radians, displayed in Supplementary Fig. 8d. We compared the proportion of "deflecting" trajectories for sham and object-containing locations using an object-label-shuffling permutation test (Supplementary Fig. 8d, right column). As the arena wall blocked trajectories for the other two object positions ("wall" and "corner"), deflection trajectory analysis was only possible for the center object. To compare this newly-introduced measure of rat interaction with a virtual object to that for real objects, we performed an identical analysis on a separate dataset of three rats exploring real objects in a cylindrical arena.
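The deflecting/crossing criterion reduces to two thresholds per trajectory; a minimal sketch (the function name is illustrative, and treating trajectories that never come within 3 cm as unclassified follows the conservative cut-off described above):

```python
def classify_trajectory(deflection_angle_rad, min_distance_cm):
    """Classify a trajectory through the 10 cm circle around an object.
    'Deflecting': closest approach < 3 cm with an acute deflection angle
    (< 1.56 rad, i.e. < 90 degrees); 'crossing': closest approach < 3 cm
    with an obtuse angle; trajectories staying farther than 3 cm are
    left unclassified (None). Illustrative sketch of the criterion."""
    if min_distance_cm < 3.0:
        return "deflecting" if deflection_angle_rad < 1.56 else "crossing"
    return None
```

The proportion of "deflecting" labels per location is then what the object-label-shuffling permutation test compares between object and sham.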
In the fraction of sessions where the virtual objects were programmed to be interactive, i.e., displayed a shrinking, rotating, jumping, or running animation upon the rat's approach, we observed increased exploration in the objects' vicinity (data not shown). As this behavior was variable across animals, our data lacked sufficient power to assess the effect statistically.
Brain state segmentation. Hippocampal activity was segmented into two states: theta and non-theta. A Gaussian-mixture hidden Markov model based on the spectral power ratio, in the whitened CA1 pyramidal-layer LFP, between the 6-12 Hz band and the sum of the 1-5 Hz and 15-18 Hz bands was used to separate theta and non-theta states. All further analysis of hippocampal place cells was constrained to theta-associated periods.
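The band-power ratio feeding the state model can be sketched as below; this is a bare FFT-based illustration of the 6-12 Hz over (1-5 Hz + 15-18 Hz) ratio (the function name is hypothetical, and the study computed it on the whitened LFP before fitting the mixture model):

```python
import numpy as np

def theta_ratio(lfp, fs):
    """Spectral power ratio used for theta/non-theta segmentation:
    6-12 Hz power over the summed 1-5 Hz and 15-18 Hz power, computed
    from the signal's FFT. Illustrative sketch; in the study this was
    applied to the whitened CA1 LFP and the ratio fed into a
    Gaussian-mixture hidden Markov model."""
    freqs = np.fft.rfftfreq(len(lfp), d=1.0 / fs)
    power = np.abs(np.fft.rfft(lfp)) ** 2
    theta = power[(freqs >= 6) & (freqs <= 12)].sum()
    non_theta = power[((freqs >= 1) & (freqs <= 5)) |
                      ((freqs >= 15) & (freqs <= 18))].sum()
    return theta / non_theta
```

High values of this ratio mark theta epochs (e.g. during running), low values non-theta epochs.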

Place cells analysis.
Only hippocampal pyramidal cells with place fields active in the arena were included in the analysis; spike width and firing rate were used to identify putative pyramidal cells. Place fields were calculated based on a k-nearest-neighbor algorithm, applied to periods in which the speed of the rat's head exceeded 5 cm/s, intersected with periods of the theta oscillation state. The k-nearest-neighbor estimate of the mean firing rate was calculated from the position of the rat's head and each unit's smoothed firing rate. The unit firing rate was smoothed by convolving the time-resolved spike histogram with an 800 ms rectangular window and downsampling to 30 Hz. The maze was binned with 2 cm square bins. For each bin, the smoothed unit firing rate samples were sorted by their distance to the bin center; the 300 nearest-neighbor time bins were collected and averaged to derive the mean rate of that bin. Bins with fewer than 300 neighbors within a radius of 12.5 cm were marked empty. This procedure provides a data-adaptive and robust estimate of the spatial rate map, in contrast to conventional estimation methods (the ratio of the spatially smoothed spike-count and occupancy maps); qualitatively, both measures gave the same results.
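The k-nearest-neighbor rate-map estimate can be sketched as below; a minimal NumPy illustration of averaging the k temporally smoothed rate samples closest to each spatial bin center (the function name and brute-force distance search are illustrative, not the study's implementation):

```python
import numpy as np

def knn_rate_map(positions, rates, bin_centers, k=300, max_radius=12.5):
    """k-nearest-neighbor estimate of a spatial firing-rate map.
    positions: (T, 2) head positions per time sample; rates: (T,)
    smoothed firing rate per sample; bin_centers: (B, 2) spatial bin
    centers. For each bin, average the rate over the k position samples
    nearest to it; bins whose k-th neighbor lies beyond max_radius (cm)
    are marked empty (NaN). Illustrative sketch of the approach."""
    rate_map = np.full(len(bin_centers), np.nan)
    for i, center in enumerate(bin_centers):
        dists = np.linalg.norm(positions - center, axis=1)
        order = np.argsort(dists)
        if len(dists) < k or dists[order[k - 1]] > max_radius:
            continue  # fewer than k neighbors within radius: empty bin
        rate_map[i] = rates[order[:k]].mean()
    return rate_map
```

Because the neighborhood adapts to the sampling density, well-visited bins are estimated from a fixed amount of data while undersampled bins are explicitly flagged as empty, which is the robustness advantage over the smoothed spike-count/occupancy ratio.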
The procedure used for place field map estimation has an additional benefit: it allows robust estimation of place field parameters via a bootstrap procedure. The variance of the place field center was estimated by bootstrapping, for each unit, 1000 random subsets of two-second chunks of the rodent's trajectory (75% of total trial time). The place field at each iteration was delineated by thresholding the rate map at the firing rate corresponding to the 95th percentile across all iterations; all bins above the threshold were assigned a 1, and all other bins a 0.
The above-threshold bins were segmented, using Matlab's bwboundaries function, into spatially contiguous patches, each representing a place field. The area, the rate-weighted center of mass, and the maximum and minimum firing rate were calculated for each patch; only the main (largest and highest-firing) field was used for further analysis. The location of the peak rate within the patch was computed for each bootstrap sample, and the resulting mean was used as an unbiased estimate of the x-y position of the place field center in the further analysis of place field remapping. To quantify the effect of the VE shift on the place fields of the active population of place cells, we computed the displacement of the place field center between consecutive sessions (Normal to Shift, Shift to Normal, etc.). The Kruskal-Wallis test was used to detect an overall difference in population means between sessions along each axis of the arena, and significant axes were probed for individual differences between sessions using a paired Wilcoxon signed-rank test.
[Figure legend fragments.] We found no significant relationship between statistics of the head dips and decision side. (d) Factors affecting jump-side preference: jump decision as a function of exploration time before the jump. Note the difference in accuracy between the first and second sessions recorded each day. Logistic regression found a significant correlation between exploration time and safe-side preference (b = -0.06, p < .05; solid black line, 68% CI as gray shading). Evening sessions with jump latencies greater than 18 s (7 of 33 sessions) were, as a result, excluded from the analysis of jump preference. Note the mode at 0 radians close to the wall (< 10 cm from the virtual wall), resembling thigmotaxis behavior near the real walls in both conditions; the mode close to 1.5 rad corresponds both to trajectories that "deflect from" and those that "cross" the virtual wall (Fig. 3).