
Developing Global Spatial Memories by One-Shot Across-Boundary Navigation

Lei, Xuehui; Mou, Weimin
In: Journal of Experimental Psychology: Learning, Memory, and Cognition, Vol. 48 (2022-06-01), Issue 6, pp. 798–812

By Xuehui Lei and Weimin Mou, Department of Psychology, University of Alberta

Acknowledgement: This work was funded by the Natural Sciences and Engineering Research Council of Canada to Weimin Mou. We thank Subekshya Adhikari, Jarlo Alganion, Aradhna Chawla, Lara Pereira, Mujtaba Siddique, and Qingyao Xue for their contributions to data collection.

In daily life, it is common for people to navigate between spaces that are separated by boundaries (e.g., moving between two rooms at home). Understanding whether and how people develop global spatial memory of across-boundary spaces by navigation is theoretically important (Mou & Wang, 2015; Wang & Brockmole, 2003). Recent studies have demonstrated that people can develop global representations of spatial relations between across-boundary locations (e.g., encoding the relative orientations of two rooms) through extensive across-boundary navigation (e.g., Lei et al., 2020; Lei & Mou, 2021; Shine et al., 2016; Strickrodt et al., 2019). However, it is not clear whether people can develop global spatial representations after physically walking, for the first time, from one space to a neighboring space separated by boundaries (one-shot across-boundary navigation). The current study tackled this issue.

Understanding spatial memory acquired from across-boundary navigation is critical to understanding the specific roles of different navigation methods in developing spatial memory. In navigation, people primarily rely on two methods to update self-location (their positions and headings) and develop spatial memories. One method is path integration, in which people rely on self-motion cues (including optic flow and idiothetic cues) to continually update their self-location (Etienne et al., 1998; Etienne & Jeffery, 2004; Loomis et al., 1999; Mittelstaedt & Mittelstaedt, 1980). The other method is piloting, in which people rely on perceived landmarks to update their self-location (Etienne et al., 2004; Foo et al., 2005; Wehner et al., 1996). These two methods complement each other. Path integration can provide a metric for a spatial framework to organize landmarks (Savelli & Knierim, 2019), whereas piloting can correct, recalibrate, and also reset path integration (Etienne et al., 2004; Jayakumar et al., 2019; Zhang & Mou, 2017).

However, the exact role of path integration in developing global spatial memory is controversial in the literature. Some researchers conjecture that when piloting cues are minimal, path integration plays a critical role in developing spatial memory. In a large-scale environment, people in one space may not be able to see another space; they primarily rely on path integration to encode global spatial relations between the two spaces and then integrate the locations of objects in both spaces into global spatial representations (Gallistel, 1990; Gallistel & Matzel, 2013; Jacobs & Schenk, 2003; Lei et al., 2020; Loomis et al., 1999; McNaughton et al., 2006; Meilinger, 2008). In contrast, other researchers de-emphasize the function of path integration in developing global spatial representations (e.g., Wang, 2016; Warren et al., 2017). There are two major reasons for this argument. First, path integration is error-prone, and its errors accumulate rapidly when walking complex paths in a large-scale environment. Second, path integration is primarily engaged with the local immediate space and does not keep track of self-location relative to a remote space (Wang, 2004; Wang & Brockmole, 2003). Thus, path integration may not be able to develop global spatial representations.

To differentiate between these theoretical arguments, researchers have examined the development of global spatial memories from across-boundary navigation (e.g., Lei et al., 2020; Marchette et al., 2014; Wang & Brockmole, 2003). In across-boundary spaces, researchers can minimize the influence of piloting because participants cannot directly see spatial relations between locations in two spaces separated by boundaries. Therefore, whether participants develop representations of spatial relations between two spaces separated by boundaries, as compared with two spaces not separated by boundaries, provides a stricter test of the pure role of path integration in developing global spatial memories. Recent studies have shown that in some restricted experimental situations, participants can develop global memories of spatial relations between across-boundary locations by across-boundary navigation (e.g., Lei et al., 2020; Shine et al., 2016). In these studies, the participants navigated along a simple path and had extensive experience navigating between across-boundary spaces. In addition, in Shine et al. (2016), the participants were explicitly instructed to learn the across-boundary spatial relations (orientations in one room relative to orientations in another room). In Lei et al. (2020; see also Lei & Mou, 2021), the participants could not develop global representations of spatial relations between rooms unless they had learned the environment outside the rooms before learning objects’ locations in the rooms.

The precondition of using a simple path is not surprising because it is well known that path integration is error-prone (Kelly et al., 2007; Wang & Brockmole, 2003). However, the role of extensive navigation experience in developing global spatial representations is less clear. Participants in these studies (Lei et al., 2020; Shine et al., 2016) changed their locations using a joystick, so they lacked the idiothetic cues produced by physical translation. Studies have shown that physical translation is important for effective navigation (Ruddle et al., 2011). Thus, it is not clear whether participants who had full rotational and translational movement could develop global spatial representations without extensive navigation experience, in particular after one-shot across-boundary navigation.

It is theoretically important to investigate whether global spatial representations develop after one-shot across-boundary navigation. If global spatial memories do develop after one-shot across-boundary navigation, this result would strongly support the theoretical position that people primarily rely on path integration to encode global spatial relations and develop global spatial representations (Gallistel, 1990; Gallistel & Matzel, 2013; Jacobs & Schenk, 2003; Lei et al., 2020; Loomis et al., 1999; McNaughton et al., 2006; Meilinger, 2008). If one-shot across-boundary navigation cannot lead to global spatial representations, but extensive across-boundary navigation can (Lei et al., 2020; Shine et al., 2016), this would indicate a limitation of path integration in developing global spatial representations (Wang, 2016; Warren et al., 2017). In that case, only primitive global spatial representations would be developed in earlier navigation, and these primitive representations might support later navigation; mature global spatial representations would form as a result of such a reciprocal relationship between navigation and spatial memory. Therefore, examining the development of global spatial representations after one-shot across-boundary navigation can provide insight into the relationship between spatial memory and navigation.

To the best of our knowledge, Kelly et al. (2007) conducted the only study examining the development of global spatial representations after one-shot across-boundary navigation. In their study, the participants learned objects’ locations in one virtual room and then physically walked through a virtual wall into another virtual room. The testing room was either visually the same as or different from the learning room. In a judgment of relative direction (JRD) task, the participants adopted imagined perspectives in the learning room and pointed to target objects from the imagined perspectives using their memories. Global spatial representations spanning the learning and testing rooms were assessed by a global sensorimotor alignment effect (i.e., better performance when the imagined perspective in the learning room and the actual perspective in the testing room were aligned than when the two perspectives were misaligned). A global sensorimotor alignment effect would indicate that people encode their actual perspectives in the testing room and the locations of objects in the learning room in the same global spatial representations (Sholl et al., 2006). Otherwise, the alignment or misalignment between their actual perspectives in the testing room and imagined perspectives in the learning room should not matter in the JRD task. Note that the JRD task itself does not require any global spatial relations because, in a JRD trial, all objects specifying the imagined perspectives and the targets are in the learning room. Therefore, any global sensorimotor alignment effect should be attributed to global spatial representations formed prior to the JRD task.

Unfortunately, Kelly et al. (2007) provided mixed evidence: the global sensorimotor alignment effect occurred when the testing room looked the same as the learning room but not when the testing room looked different from the learning room. One possibility is that their participants had global representations, but the global representations were stronger in the visually same testing room than in the visually different testing room. Han and Becker (2014) showed that global representations were stronger when two neighborhoods shared the same color. The global sensorimotor alignment effect may appear only when the global representations are sufficiently strong. Another possibility is that their participants did not have global representations. The global sensorimotor alignment effect in the visually same testing room may have occurred because the participants, upon entering the testing room, reanchored themselves in the learning room due to visual similarity (Lei & Mou, 2021; Marchette et al., 2014, 2017; Riecke & McNamara, 2017). The reanchored heading might have been the last heading in the learning room, which coincided with the global relation between the learning and testing rooms; thus, the reanchored heading appeared to be the global heading, producing the global sensorimotor alignment effect.

Consequently, the current study systematically examined the extent to which the development of global spatial memories occurs by one-shot across-boundary navigation. We removed the possibility of using visual-based reanchoring by making the testing room visually different from the learning room. Furthermore, we increased the likelihood of producing stronger global spatial representations by making navigation in the virtual environments more realistic (otherwise people may ignore spatial updating). For example, the current study superimposed the virtual rooms onto the real rooms, had the participants touch the real environments to calibrate the virtual environments, and had them walk naturally through real doorways toward the neighboring testing room.

It is worth noting that, in the literature, it is not even clear whether people can update self-location relative to an array of objects across a distance, but within the same room, after they walk from the learning to the testing position in the same room. The null sensorimotor alignment effect when the learning and testing rooms looked different in Kelly et al. (2007) could simply be due to the relatively far distance between the testing position and the objects rather than to across-boundary walking. The current study also tackled this issue.

There were six experiments in the current study. Experiment 1 examined sensorimotor alignment effects after participants walked the same distance between the learning and testing locations within the same room (within-boundary walking) or in different rooms (across-boundary walking). Experiments 2–6 only focused on one-shot across-boundary walking. In particular, Experiments 2–3 examined factors that might affect encoding global spatial relations before testing. Experiments 4–6 examined factors in the JRD trial that might affect choosing the updated global representations or the retrieved learning-viewpoint representations in the JRD task.

Experiment 1

The primary purpose of Experiment 1 was to investigate whether people can update headings in global representations after one-shot walking across boundaries. The participants were divided into two groups, with one group walking across boundaries and the other group walking the same distance within the boundary. If there were comparable sensorimotor alignment effects in both the within- and across-boundary conditions, this result would strongly support the claim that global spatial representations can be developed by one-shot across-boundary navigation. If there was no sensorimotor alignment effect even in the within-boundary condition, this result would strongly undermine the possibility that global spatial representations can be developed by walking a distance in one-shot navigation, whether within or across boundaries. In addition, a larger sensorimotor alignment effect in the within-boundary condition would indicate an impairing effect of boundaries on path integration. Some previous studies have shown that boundaries might not impair path integration (Mou & Wang, 2015), whereas others have suggested that boundaries might significantly impair it (Radvansky et al., 2010; Radvansky & Copeland, 2006; Wang & Brockmole, 2003).

Method

Participants

The study was approved by the Ethics Committee of the University of Alberta. Sixty-four university students (32 female) with normal or corrected-to-normal vision participated to partially fulfill the requirement for an introductory psychology course. Thirty-two participants (16 female) were assigned to each of the two boundary conditions; hence, sensorimotor alignment was a within-subject variable, whereas boundary condition was a between-subjects variable. The power to detect a significant main effect of sensorimotor alignment was .78 at the alpha level of .05 using a mixed-design ANOVA, assuming a partial eta squared (ηp2) of .11 (see the MATLAB code for the power analysis at https://doi.org/10.7939/r3-aqm4-3p16).
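The authors archived their MATLAB power script at the DOI above. As an illustrative sketch only (not the authors' code), a similar power figure can be approximated analytically in Python by converting ηp2 to Cohen's f2 and evaluating the noncentral F distribution:

```python
# Approximate power for the within-subject main effect (alignment) in a
# 2 x 2 mixed ANOVA, treated as an F test with df = (1, N - 2).
# Illustrative sketch only; the authors' actual simulation code is
# archived at the DOI cited in the text and may differ in detail.
from scipy.stats import f as f_dist, ncf

def anova_power(eta_p2, n_total, df1, df2, alpha=0.05):
    f2 = eta_p2 / (1.0 - eta_p2)      # Cohen's f^2 from partial eta^2
    nc = f2 * n_total                 # noncentrality parameter
    f_crit = f_dist.ppf(1.0 - alpha, df1, df2)
    return 1.0 - ncf.cdf(f_crit, df1, df2, nc)

power = anova_power(eta_p2=0.11, n_total=64, df1=1, df2=62)
print(round(power, 2))  # close to the reported .78
```

The small remaining gap from the reported value is expected, since a simulation can model the within-subject correlation directly while this analytic shortcut does not.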

Materials and Design

The real experimental lab space had two square rooms (4.4 m by 4.4 m each) and a hallway (Figure 1A). Each room was equipped with a virtual environment system and a motion tracking system. The immersive virtual environment was presented using Vizard software (WorldViz, Santa Barbara, CA) in a head-mounted display (HMD; Oculus Rift, Oculus VR, LLC, Irvine, CA). The participants’ head motions were tracked by an InterSense IS-900 motion tracking system (InterSense, Inc., MA) so that they could physically walk and turn to change their viewpoints in the virtual environment. During learning, when the participants were asked to replace the objects, they used a pointing device (an InterSense Wand) to control a virtual blue stick. In the JRD task, the participants used a joystick (Logitech Extreme 3D Pro, Newark, CA) to judge the relative direction to a target from an imagined perspective.
[Figure 1]

For all the participants, the learning position, testing position, and walking path were the same in the real lab space. The learning position was the center of one real lab room, and the testing position was the center of the other real lab room. The walking path led from the learning position to the testing position. The participants only saw the virtual environments and at no point saw the real lab space. Nine virtual objects were presented on the ground, with one object in the middle and the other eight objects evenly distributed every 45° in a circle (radius = 1.8 m). The learning position was in the middle of this circular array (i.e., object 9 in Figure 1). Real objects were also placed on the ground at locations such that the virtual objects overlapped with them. These real objects were there for the participants to physically touch, to increase the realism of the virtual environments.

The across-boundary and within-boundary conditions (a between-subjects variable) had different virtual environments. In the across-boundary condition, the virtual environment consisted of two square rooms (4.4 m by 4.4 m each), with one for learning and the other for testing (Figure 1B). They overlapped with the real lab rooms. The learning position was the center of the virtual learning room, and the testing position was the center of the virtual testing room. The virtual learning and testing rooms were visually different. The virtual learning room had a door that overlapped with the door in the real lab room, and it had four white walls with hexagon patterns. The virtual testing room did not have a door, and it had four red walls with brick patterns. In the within-boundary condition, the virtual environment presented one square room (13.2 m by 13.2 m; Figure 1C). This virtual room was created with the testing position as the center of the room and its right wall overlapping the right wall of the real lab room for learning. The virtual room did not have a door; it had two adjacent walls that were red with brick patterns, while the other two walls were white with hexagon patterns. Thus, in both the across-boundary and within-boundary conditions, the participants’ physical learning and testing locations and the walking path between them were the same in the real lab space. Only the virtual environments determined whether learning, testing, and walking took place across or within a boundary.

Furthermore, the participants in different boundary conditions received different instructions about the ending position of their walking toward the testing position. In the across-boundary condition, the participants were told that they would walk to another position in a different room, whereas in the within-boundary condition, the participants were told that they would walk to another position within the same room. When walking outside the real lab room for learning, the participants in the across-boundary condition were instructed to touch the real door, whereas the participants in the within-boundary condition did not touch anything. In addition, after reaching the testing position, the participants in the across-boundary condition were reassured that they had walked to another position in a novel room, whereas the participants in the within-boundary condition were told that they had walked to another position in the same room.

The second independent variable (i.e., sensorimotor alignment) was specified by the relation between the participants’ actual perspective and the imagined perspective in the JRD task. The actual perspective was the participants’ physical/body perspective (Mou et al., 2004). For each JRD trial, the locations specifying the imagined perspective and the target location were all from the remembered object array (e.g., imagine you are standing at object 4 and facing object 2; point to object 5). The independent variables and important design parameters are summarized in Table 1.
[Table 1]

The participants’ actual perspectives were 0° and 180° at the testing position, and the imagined perspectives were also 0° and 180° inside of the remembered array of objects (see Figure 1). Depending on the alignment between the actual and imagined perspectives, there were two types of trials: sensorimotor aligned and sensorimotor misaligned (within-subject variable). Table 2 shows the actual and imagined perspectives for each trial type (aligned or misaligned in Table 2 for Experiment 1).
[Table 2]

The JRD task was blocked by the two actual perspectives. In each block, 16 trials were generated for each imagined perspective (0° or 180° in Table 3; see Table 4 for trials used in Experiment 5), producing 32 trials. The order of the blocks (i.e., the two actual perspectives) was counterbalanced across the participants, and the order of the trials within each block was randomized for each participant.
[Table 3]
[Table 4]

Therefore, this experiment used a mixed design, with one between-subjects variable (boundary condition: across-boundary, within-boundary) and one within-subject variable (sensorimotor alignment: aligned, misaligned). The dependent variables were the absolute angular error and response latency in the pointing responses of the JRD task.
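The correct response in a JRD trial (e.g., "standing at object 4, facing object 2, point to object 5") is a purely geometric function of the three named locations. The sketch below illustrates that computation with hypothetical coordinates, not the experiment's actual layout:

```python
# Correct pointing direction in a JRD trial: the signed egocentric angle,
# seen from the imagined perspective (standing at A, facing B), to target C.
# Coordinates are hypothetical, for illustration only.
import math

def jrd_angle(stand, face, target):
    """Signed angle in degrees: positive = to the right (clockwise)."""
    heading = math.atan2(face[0] - stand[0], face[1] - stand[1])
    bearing = math.atan2(target[0] - stand[0], target[1] - stand[1])
    deg = math.degrees(bearing - heading)
    return (deg + 180.0) % 360.0 - 180.0   # wrap into [-180, 180)

# Standing at the origin facing "north" (0, 1); a target due east lies 90° right.
print(jrd_angle((0, 0), (0, 1), (1, 0)))  # 90.0
```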

Procedure

Before the experiment, the participants were led into one room (not the lab room used in the formal experiment) to sign consent forms, read instructions, and practice how to use a joystick to point. Next, the participants were blindfolded and guided on a circuitous path to the center of the real lab room for learning (i.e., the learning position, object 9 in Figure 1). They faced the learning orientation of 270° (i.e., facing the right wall in Figure 1). Then they were required to close their eyes, remove the blindfold, and put on the HMD.

In the learning phase, the participants first looked around the room and went to touch the wall in front of them (i.e., the right wall in Figure 1). Then they returned to the learning position and the learning orientation, and the objects were presented. The participants named the objects with the help of the experimenter. Then, they were instructed to touch three objects (the object at 3 that was in front of them, the object at 6 that was on the walking path, and another random object). To touch each object, they started from the learning position, went to touch the object, and then returned to the learning position. Touching the wall and the objects helped the participants calibrate their movement in the virtual environment with the real lab space and also made the participants feel the virtual environment was as stable as the real environment (Mohler et al., 2006; Siegel et al., 2017; Taube et al., 2013). Next, the participants returned to the learning orientation and were given three minutes to learn the objects’ locations while standing at the learning position and facing the learning orientation. After three minutes, the objects disappeared, and the participants replaced the objects. To replace an object, the probed object with its name appeared at the center of the HMD, and the participants controlled the virtual stick to replace it. The object was shown at the replaced location and also at the correct location as feedback. The replaced locations were recorded. There were three blocks to replace the objects, and the order of the objects was randomized in each block. After this, the objects were presented until the participants reported that they had good memories of the objects’ locations. The learning phase ended.

Between the learning and testing phases, several extra steps were used to increase the likelihood that the participants updated their self-location in the virtual environments just as in the real environments. After learning and while still taking the learning viewpoint (i.e., standing at object 9 and facing object 3 as in Figure 1), the participants closed their eyes, took off the HMD, and put on the blindfold. They were instructed to use their fingers to point to some objects that were randomly named by the experimenter. Then, they were asked to turn to face object 6 (see Figure 1), and they pointed to the randomly named objects as requested. After completing this, they removed the blindfold and put on the HMD to see the virtual environment from a new viewpoint (i.e., standing at object 9 and facing object 6 as in Figure 1). To further motivate the participants to update their viewpoints, they were asked to replace all the objects once without feedback. The replaced locations were recorded. After replacing the objects, they closed their eyes to take off the HMD and put on the blindfold. Next, they were guided to walk from object 9 to object 6 (see Figure 1). Again, at the new location (object 6), they first used their fingers to point to objects named by the experimenter and then put on the HMD to replace all the objects once without feedback. After replacing the objects, they closed their eyes to take off the HMD and put on the blindfold. All these measures were used to make the participants understand that the objects were stabilized relative to the environment rather than stabilized relative to their bodies during locomotion (Mou et al., 2008).

Then, the participants were instructed about the ending position of their walking, either being a different position in the same room or a different position in a novel room. When walking outside the real lab room for learning, the participants in the across-boundary condition touched the real door. The participants in both conditions were instructed to pay attention to their walking and keep track of the objects during walking. The blindfolded participants were led to walk a path (i.e., represented by the dashed lines in Figure 1) to the testing position and then were oriented to face an actual perspective (i.e., 0° or 180°, represented by the dashed arrows in Figure 1). Then, they closed their eyes, removed the blindfold, and put on the HMD in the real testing room. The participants were then told that they had walked to another position in a novel room or another position in the same room.

In the testing phase, the participants stood at the testing position and were given a joystick to conduct the JRD task. For each actual perspective (i.e., 0° or 180°), they finished one block of the JRD trials. In each trial, a sentence instructing an imagined perspective was presented at the center of the HMD screen (e.g., “standing at the lock, facing the candle”). The participants were required to keep their actual perspective and mentally take the imagined perspective. They clicked the trigger on the joystick once they had taken the imagined perspective. The duration between the presentation of the imagined perspective and the trigger click was recorded as orientation latency. After the participants clicked the trigger, the first sentence disappeared, and another sentence was presented to instruct a target object (e.g., “point to the mug”). The participants were required to keep their actual perspective and use the joystick to point to the target from the imagined perspective. They were asked to respond as fast as possible without sacrificing accuracy. The duration between the presentation of the target and the response was recorded as response latency. The response direction was also recorded to calculate the absolute angular pointing error. After the participants responded, the second sentence disappeared. The next trial started after 750 ms.
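The absolute angular pointing error is the circular distance between the response direction and the correct direction. A minimal sketch of this standard computation (the article does not publish its scoring code):

```python
# Absolute angular pointing error: the smallest angular difference between
# the response direction and the correct direction, both in degrees.
def absolute_angular_error(response_deg, correct_deg):
    diff = (response_deg - correct_deg) % 360.0
    return min(diff, 360.0 - diff)        # always in [0, 180]

# A response of 350° against a correct direction of 10° is 20° off,
# not 340°, because direction is circular.
print(absolute_angular_error(350.0, 10.0))  # 20.0
```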

Results

We calculated the mean orientation latency, mean response latency, and mean absolute angular pointing error in each trial type. We conducted ANOVAs for all these measures with one between-subjects factor (boundary condition: across-boundary, within-boundary) and one within-subject factor (sensorimotor alignment: aligned, misaligned).

Orientation latency showed no significant effects in any experiment of the current study (Figure S1 in the online supplementary materials). Thus, for this and the following experiments, we report detailed results only for response latency and absolute pointing error.

Response Latency

Figure 2 shows the mean response latency for each sensorimotor alignment and each boundary condition. The main effect of boundary was not significant, F(1, 62) = 1.77, p = .189, ηp2 = .03. The main effect of sensorimotor alignment was significant, F(1, 62) = 12.09, p = .001, ηp2 = .16 (comparable to Cohen’s d = .62), showing that the responses in the aligned trials were faster than those in the misaligned trials. The interaction between boundary and sensorimotor alignment was not significant, F(1, 62) = .00, p = .995, ηp2 = .00, showing that the sensorimotor alignment effect was not different in across-boundary and within-boundary conditions. A Bayesian t test comparing the sensorimotor alignment effects (i.e., the difference in response latency between the aligned and misaligned trials) in across-boundary and within-boundary conditions (using IBM SPSS 26 with a JZS prior) also favored the null effect over the alternative, BF01 = 5.30.
[Figure 2]
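For reference, a JZS Bayes factor for a t statistic can be computed by one-dimensional numerical integration (Rouder et al., 2009). The sketch below uses the one-sample form with an assumed prior scale r = √2/2 and a hypothetical t value; SPSS's exact defaults may differ, and the two-sample case replaces n with n1·n2/(n1 + n2) and the degrees of freedom with n1 + n2 − 2.

```python
# One-sample JZS Bayes factor via numerical integration over the
# effect-size prior's scale g (inverse-gamma mixing distribution).
# BF01 > 1 favors the null. Prior scale r and the example t value are
# assumptions for illustration, not taken from the article.
import math
from scipy.integrate import quad

def jzs_bf01(t, n, r=math.sqrt(2) / 2):
    nu = n - 1
    def integrand(g):
        # log of the inverse-gamma(1/2, r^2/2) prior density, for stability
        log_prior = (math.log(r) - 0.5 * math.log(2 * math.pi)
                     - 1.5 * math.log(g) - r * r / (2 * g))
        like = ((1 + n * g) ** -0.5
                * (1 + t * t / ((1 + n * g) * nu)) ** (-(nu + 1) / 2))
        return like * math.exp(log_prior)
    numerator, _ = quad(integrand, 0, math.inf)
    denominator = (1 + t * t / nu) ** (-(nu + 1) / 2)
    return denominator / numerator        # BF01 = 1 / BF10

# A small t with n = 32 favors the null; a large t favors the alternative.
print(jzs_bf01(t=0.5, n=32) > 1.0, jzs_bf01(t=5.0, n=32) < 1.0)
```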

In addition, as our primary focus was the sensorimotor alignment effect, we also assessed it for each boundary condition. We conducted paired sample t tests between the aligned and misaligned trials in each boundary condition. In both across- and within-boundary conditions, responses were significantly faster in the aligned than misaligned trials (t(31) = 2.20, p = .036, Cohen’s d = .55; t(31) = 2.85, p = .008, Cohen’s d = .71, respectively), demonstrating sensorimotor alignment effects.
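The paired comparison and its paired-design Cohen's d (mean difference divided by the SD of the differences) can be reproduced as follows; the latencies below are simulated for illustration, not the article's data.

```python
# Paired t test of aligned vs. misaligned response latencies within one
# boundary condition, with Cohen's d for paired designs.
# The latency values are invented (in seconds), not the article's data.
import numpy as np
from scipy.stats import ttest_rel

aligned = np.array([3.8, 4.1, 4.4, 3.9, 4.2, 4.0, 4.3, 3.7])
misaligned = np.array([4.5, 4.6, 5.0, 4.4, 4.9, 4.7, 4.8, 4.3])

t, p = ttest_rel(misaligned, aligned)
diff = misaligned - aligned
d = diff.mean() / diff.std(ddof=1)   # paired-design Cohen's d
print(round(d, 2))  # 6.48 for this toy data
```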

Absolute Pointing Error

Figure 3 shows the mean absolute angular pointing error as a function of sensorimotor alignment and boundary condition. The main effect of boundary was not significant, F(1, 62) = .89, p = .349, ηp2 = .01. The main effect of alignment was significant, F(1, 62) = 7.20, p = .009, ηp2 = .10 (comparable to Cohen’s d = .48), showing more accurate responses in the aligned trials than in the misaligned trials. The interaction between boundary and sensorimotor alignment was not significant, F(1, 62) = .80, p = .374, ηp2 = .01, showing that the sensorimotor alignment effect was not different in across-boundary and within-boundary conditions. The Bayes factor (BF01 = 3.67) supported the null interaction effect.
[Figure 3]

We also examined the sensorimotor alignment effect for each boundary condition. In the across-boundary condition, responses in the aligned trials were more accurate than those in the misaligned trials, t(31) = 2.06, p = .048, Cohen’s d = .51, showing a sensorimotor alignment effect. In the within-boundary condition, there were no significant differences between the aligned and misaligned trials, t(31) = 1.80, p = .081, Cohen’s d = .45.

Discussion

The results of Experiment 1 showed comparable sensorimotor alignment effects in the within-boundary and across-boundary conditions, demonstrating that the participants updated their global headings equally well whether they walked across boundaries or within the same boundary. These results support the claim that people can update headings relative to a global environment and develop global spatial representations by one-shot walking. In addition, boundaries do not impair updating in the global environment. The following experiments (2–6) centered only on one-shot across-boundary walking and further examined factors that could affect updating global headings and developing global representations.

Experiments 2–3 tested two factors that might affect the global updating of self-location. Specifically, the first factor was the instruction for attention and tracking the objects in across-boundary walking, which might have explicitly required the participants to relate their self-location on the walking path with the objects in the learning room. The second factor was the existence of the door in the virtual learning room, which might have served as a visual cue to provide navigational affordance linking to another space and might have helped the development of global memories across boundaries.

Experiment 2

In Experiment 1, the participants were instructed to pay attention to walking and to keep track of the objects during walking. Experiment 2 tested whether these instructions were essential to updating headings relative to a global environment. Previous studies have shown that spatial updating of headings relative to immediate spaces appears to be automatic (Farrell & Robertson, 1998; Rieser, 1989). However, Wang (2004) showed that updating relative to a remote space (an imagined space) seems not to be automatic. Experiment 2 therefore removed the instructions directing attention to the updating process. If the results still showed a sensorimotor alignment effect, then global updating and the development of global representations by one-shot across-boundary walking would be automatic, in the sense that they do not require explicit instructions for attention. By contrast, if the results showed no sensorimotor alignment effect, then attention to the updating process would be needed to update global headings after one-shot walking across boundaries.

Method

Participants

Thirty-two university students (16 female) with normal or corrected-to-normal vision participated to partially fulfill the requirement for an introductory psychology course. With 32 participants, the power to detect ηp2 = .16, the observed effect size for the sensorimotor alignment effect in Experiment 1, was .66 at an alpha level of .05.
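The reported power can be approximated from the stated values. The sketch below is our own illustration, not code from the study; it assumes the conversion from ηp2 to the paired effect size dz = √(F/N) with F = ηp2 · df/(1 − ηp2), uses a normal approximation to the noncentral t distribution, and hardcodes the two-tailed critical value of about 2.04 for df = 31:

```python
from math import sqrt
from statistics import NormalDist

def approx_paired_power(eta_p2, n, t_crit=2.04):
    """Approximate power of a two-tailed paired t test (normal approximation).

    eta_p2 is converted to F via F = eta_p2 * df / (1 - eta_p2) with df = n - 1,
    then to the paired effect size dz = sqrt(F / n). t_crit ~ 2.04 is the
    two-tailed .05 critical value for df = 31 (an assumed constant here).
    """
    df = n - 1
    F = eta_p2 * df / (1 - eta_p2)
    dz = sqrt(F / n)
    ncp = dz * sqrt(n)  # noncentrality parameter of the t statistic
    # One-sided normal approximation; the lower rejection tail is negligible.
    return NormalDist().cdf(ncp - t_crit)

print(round(approx_paired_power(0.16, 32), 2))  # → 0.65
```

For ηp2 = .16 and N = 32 this gives roughly .65, close to the reported .66; the small gap reflects the normal approximation to the noncentral t distribution.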

Materials, Design, and Procedure

The materials, design, and procedure were the same in Experiment 2 as for the across-boundary condition in Experiment 1 except that, prior to walking, the participants did not receive the instruction to pay attention to walking and keep track of the objects during walking.

Results

Response Latency

Figure 2 plots the mean response latency for each sensorimotor alignment. The responses in the aligned trials were significantly faster than those in the misaligned trials, t(31) = 2.41, p = .022, Cohen’s d = .60 (comparable to ηp2 = .15), demonstrating a sensorimotor alignment effect.

Absolute Pointing Error

Figure 3 shows the results in the mean absolute angular pointing error. The responses in the aligned trials were significantly more accurate than those in the misaligned trials, t(31) = 2.58, p = .015, Cohen’s d = .64 (comparable to ηp2 = .17), demonstrating a sensorimotor alignment effect.

Discussion

The results in Experiment 2 showed a sensorimotor alignment effect, suggesting that updating and developing global representations by one-shot across-boundary walking is automatic in the sense that it does not require explicit instruction for attention to the updating process.

Experiment 3

Experiment 3 tested whether a visual cue indicating navigational affordance to other spaces is important to updating headings relative to global relations and developing global memories after one-shot across-boundary walking. Specifically, it tested whether the door of the learning room is important for updating headings relative to global relations. Previous studies have shown that, in scene perception, people automatically identify navigational affordances in a scene, that is, where one can move to, such as through a door or along an unobstructed path (Bonner & Epstein, 2017; Greene & Oliva, 2009). In Experiments 1–2, the door of the learning room might have provided navigational affordance to another space, which might have supported updating relative to global relations and developing global memories. When participants walked through virtual walls instead of doors, the global updating process might have been impaired (Kelly et al., 2007). Experiment 3 removed the door in the virtual learning room. If the results still showed a sensorimotor alignment effect, then visual cues for navigational affordance between spaces are not important to global updating and developing global memories based on one-shot across-boundary walking.

Method

Participants

Thirty-two university students (16 female) with normal or corrected-to-normal vision participated to partially fulfill the requirement for an introductory psychology course.

Materials, Design, and Procedure

The materials, design, and procedure were the same in Experiment 3 as for the across-boundary condition in Experiment 1, except that there was no door in the virtual learning room, and the participants did not touch the door of the real lab room when walking outside the learning room.

Results

Response Latency

Figure 2 shows the results of the mean response latency. The responses in the aligned trials were significantly faster than those in the misaligned trials, t(31) = 2.38, p = .024, Cohen’s d = .60, demonstrating a sensorimotor alignment effect.

Absolute Pointing Error

Figure 3 shows the results of the mean absolute angular pointing error. The responses in the aligned trials were not significantly different from those in the misaligned trials, t(31) = 1.44, p = .161, Cohen’s d = .36, although the trend was consistent with a sensorimotor alignment effect.

Discussion

The results in Experiment 3 showed a sensorimotor alignment effect in response latency, suggesting that visual cues indicating navigational affordance between spaces are not necessary for updating headings relative to global relations and developing global representations by one-shot across-boundary walking.

Experiments 1–3 consistently showed sensorimotor alignment effects after one-shot across-boundary walking, indicating that the participants developed global representations by one-shot walking and also relied on the global representations in the JRD task. In contrast, in Kelly et al. (2007), the participants did not show sensorimotor alignment effects after one-shot walking into a visually and spatially different room. The participants in their study might also have developed global memories. However, some properties of the JRD task might have made the participants in their study only rely on the retrieved learning-viewpoint representations from long-term memory (i.e., encoding their original learning viewpoint relative to the object array) instead of the global representations developed by walking.

Experiments 4–6 examined three factors of the JRD trials that might modulate the use of the updated global representations versus the retrieved learning-viewpoint representations from long-term memory. Specifically, Experiment 4 examined the first factor, including the learning orientation as one of the imagined perspectives, as including the learning orientation might activate the learning-viewpoint representations in long-term memory. The second factor was having the participants imagine standing at the learning position and then conduct egocentric pointing, making the testing scenario more similar to the learning scenario. The third factor was increasing the task difficulty by testing more imagined perspectives. The learning-viewpoint representations in long-term memory were better developed during learning than the global representations developed by walking. When the number of imagined perspectives increased, it might have been easier to take imagined perspectives using the learning-viewpoint representations in long-term memory than using the global representations.

Experiment 4

Experiment 4 tested whether including the learning orientation as one of the imagined perspectives in the JRD task would affect the use of the global representations developed by one-shot across-boundary walking. Since the learning orientation was encoded in the originally formed learning-viewpoint spatial representations in long-term memory, including the learning orientation as an imagined perspective might encourage the use of the learning-viewpoint representations and discourage the use of the global representations. All previous experiments in the current study excluded the learning orientation from the imagined perspectives in the JRD trials (see Table 1), and this exclusion might have led to clear sensorimotor alignment effects.

In Experiment 4, after across-boundary walking, the participants conducted the task with the imagined perspectives either including the learning orientation or excluding the learning orientation. If including the learning orientation as an imagined perspective does not influence the use of global representations, then there would be sensorimotor alignment effects whether the imagined perspectives included or excluded the learning orientation. By contrast, if including the learning orientation as an imagined perspective impairs the use of global representations, then there would be a sensorimotor alignment effect only when the imagined perspectives excluded the learning orientation.

Method

Participants

Sixty-four university students (32 female) with normal or corrected-to-normal vision participated to partially fulfill the requirement for an introductory psychology course. Thirty-two of them (16 female) were assigned to each of the conditions of including or excluding the learning orientation.

Materials, Design, and Procedure

The materials, design, and procedure were the same in Experiment 4 as for the across-boundary condition in Experiment 1 except for the following differences. First, the learning orientation was either 90° or 270°, corresponding to the conditions in which the learning orientation was included or excluded in the imagined perspectives. Second, the imagined perspectives were 0°, 90°, and 180°. Thus, in addition to the two types of trials used in Experiments 1 and 2 (i.e., aligned and misaligned), there was an additional type of trial: imagined 90 (see Table 2). As a result, the group of participants who learned at 90° had imagined perspectives including the learning orientation, whereas those who learned at 270° had imagined perspectives excluding the learning orientation. For imagined 90, there were also 16 trials (see Table 3), producing 48 trials in total for each of the two blocks.

Therefore, this experiment used a mixed design, with one between-subjects variable (learning orientation: included, excluded) and one within-subject variable (trial type: aligned, misaligned, imagined 90).

Results

We conducted ANOVA with one between-subjects factor (learning orientation: included, excluded) and one within-subject factor (trial type: aligned, misaligned, imagined 90) on mean orientation latency, mean response latency, and mean absolute angular pointing error.

Response Latency

Figure 2 shows the mean response latency for each learning orientation condition and for each trial type. The main effect of learning orientation was not significant, F(1, 62) = 1.81, p = .184, ηp2 = .03. The main effect of trial type was significant, F(2, 124) = 7.74, p = .001, ηp2 = .11. The interaction between learning orientation and trial type was not significant, F(2, 124) = 2.10, p = .127, ηp2 = .03. Pairwise comparisons showed that the aligned trials were significantly faster than the misaligned trials, t(63) = 3.49, p = .001, Cohen’s d = .62; the imagined 90 trials were also significantly faster than the misaligned trials, t(63) = 2.71, p = .009, Cohen’s d = .48; however, the aligned trials were not different from the imagined 90 trials, t(63) = .99, p = .326, Cohen’s d = .17. These results showed sensorimotor alignment effects for both groups of the participants whether the learning orientation was included or excluded in the imagined perspectives.

In addition, we conducted paired sample t tests among the trial types (i.e., aligned, misaligned, and imagined 90) in each learning orientation condition (i.e., learning orientation included or excluded). In the condition of learning orientation included, aligned trials were significantly faster than misaligned trials, t(31) = 2.18, p = .037, Cohen’s d = .54, showing a sensorimotor alignment effect; imagined 90 trials were significantly faster than misaligned trials, t(31) = 3.51, p = .001, Cohen’s d = .88, showing a benefit of the learning orientation; imagined 90 trials were not different from aligned trials, t(31) = .96, p = .346, Cohen’s d = .24, showing comparable performance for the aligned perspective and the learning orientation. In the condition of learning orientation excluded, aligned trials were significantly faster than misaligned trials, t(31) = 2.78, p = .009, Cohen’s d = .69, showing a sensorimotor alignment effect; imagined 90 trials were not different from misaligned trials, t(31) = 1.17, p = .252, Cohen’s d = .29; imagined 90 trials were significantly slower than aligned trials, t(31) = 2.47, p = .019, Cohen’s d = .62.

Absolute Pointing Error

Figure 3 shows the mean pointing error for each learning orientation condition and for each trial type. The main effect of learning orientation was not significant, F(1, 62) = 1.08, p = .302, ηp2 = .02. The main effect of trial type was not significant, F(2, 124) = 3.05, p = .051, ηp2 = .05. The interaction between learning orientation and trial type was not significant, F(2, 124) = 2.31, p = .103, ηp2 = .04. Pairwise comparisons showed that the aligned trials were significantly more accurate than the misaligned trials, t(63) = 2.63, p = .011, Cohen’s d = .47; however, the other two comparisons were not significant (imagined 90 versus misaligned trials: t(63) = 1.84, p = .070, Cohen’s d = .33; aligned versus imagined 90 trials: t(63) = .51, p = .609, Cohen’s d = .09). These results showed sensorimotor alignment effects for both groups of participants whether the learning orientation was included or excluded as an imagined perspective.

In addition, we conducted paired sample t tests in each learning orientation condition. In the condition of learning orientation included, the aligned trials were not different from the misaligned trials, t(31) = 1.62, p = .115, Cohen’s d = .41; the imagined 90 trials were significantly more accurate than the misaligned trials, t(31) = 2.71, p = .011, Cohen’s d = .68, showing a benefit of the learning orientation; the imagined 90 trials were not different from the aligned trials, t(31) = 1.15, p = .258, Cohen’s d = .29, showing comparable performance for the aligned perspective and the learning orientation. In the condition of learning orientation excluded, the aligned trials were significantly more accurate than the misaligned trials, t(31) = 2.05, p = .049, Cohen’s d = .51, showing a sensorimotor alignment effect; the imagined 90 trials were not different from the misaligned trials, t(31) = .15, p = .881, Cohen’s d = .04, or from the aligned trials, t(31) = 1.43, p = .163, Cohen’s d = .36.

Discussion

The results in Experiment 4 showed sensorimotor alignment effects in both conditions when the imagined perspectives included and excluded the learning orientation. This suggests that whether or not the learning orientation was included as one of the imagined perspectives does not influence the use of the global representations developed by one-shot walking across boundaries.

Experiment 5

In Experiments 1–4, participants performed allocentric pointing in which their imagined standing positions were varied for each imagined perspective (see Table 3). Although Experiment 4 included the learning orientation in the imagined perspectives, the imagined positions were different from the original learning position (i.e., object 9 in Figure 1) in the majority of trials (10 out of 16 trials for imagined perspective 90° in Table 3). One may argue that the learning-viewpoint spatial representations formed in the learning phase are more likely to be used instead of the updated global representations in the JRD task when both the imagined position and orientation are the same as the learning position and orientation. Kelly et al. (2007) asked the participants to perform egocentric pointing by always imagining standing at the learning position and taking different imagined perspectives (e.g., “imagine facing A,” “point to B”). The egocentric pointing from the learning position, which was more similar to the learning scenario, might encourage the participants to use the learning-viewpoint spatial representations in long-term memory developed from the learning viewpoint. This might have suppressed the use of the global representations that had been developed by one-shot across-boundary walking.

Experiment 5 asked the participants to perform egocentric pointing by always imagining standing at the learning position and taking different imagined perspectives (e.g., “imagine facing the mug,” “point to the wood”). If the participants did not show a sensorimotor alignment effect, then the egocentric pointing would discourage the use of global representations after one-shot across-boundary walking.

Method

Participants

Thirty-two university students (16 female) with normal or corrected-to-normal vision participated to partially fulfill the requirement for an introductory psychology course.

Materials, Design, and Procedure

The materials, design, and procedure were the same in Experiment 5 as for the group that included the learning orientation in Experiment 4 except for the following differences. First, the participants were instructed to imagine standing at the learning position (i.e., object 9 in Figure 1) in the learning room to conduct the JRD task. Accordingly, for each trial, the sentence that instructed an imagined perspective only mentioned the facing object but not the standing object (e.g., “imagine facing the mug”). Second, for each of the three imagined perspectives (i.e., 0°, 90°, and 180°, which correspond to standing at 9 and imagining facing 1/7/5 in Figure 1), seven trials were generated using all of the other seven objects as targets (e.g., if imagining facing 1, then all possible targets were 2–8) (see Table 4). To increase power, there were two blocks of these trials for each of the two actual perspectives. The trials were randomized in each block. Thus, there were 42 trials for each actual perspective (14 for each trial type, i.e., aligned, misaligned, or imagined 90).

Results

We conducted ANOVAs with one within-subject factor (trial type: aligned, misaligned, imagined 90).

Response Latency

Figure 2 shows the mean response latency for each trial type. The main effect of trial type was significant, F(2, 62) = 9.01, p < .001, ηp2 = .23. Pairwise comparisons showed that the aligned trials were significantly faster than the misaligned trials, t(31) = 2.12, p = .042, Cohen’s d = .53; the imagined 90 trials were also significantly faster than the misaligned trials, t(31) = 4.37, p < .001, Cohen’s d = 1.09; however, the aligned trials were significantly slower than the imagined 90 trials, t(31) = 2.07, p = .047, Cohen’s d = .52. These results showed a sensorimotor alignment effect in addition to the effect from the benefit of the learning orientation (i.e., 90°).

Absolute Pointing Error

Figure 3 plots the mean absolute angular pointing error. The main effect of trial type was significant, F(2, 62) = 4.56, p = .014, ηp2 = .13. Pairwise comparisons showed the only significant comparison was that the imagined 90 trials were significantly more accurate than the misaligned trials, t(31) = 3.27, p = .003, Cohen’s d = .82. The aligned trials were not significantly different from the misaligned trials (t[31] = 1.27, p = .215, Cohen’s d = .32) or the imagined 90 trials (t[31] = 1.61, p = .118, Cohen’s d = .40).

Discussion

The results in Experiment 5 showed a sensorimotor alignment effect in a JRD task using only egocentric pointing. This suggests that the use of the global representations developed by one-shot across-boundary walking does not depend on whether the task requires egocentric or allocentric pointing.

Experiment 6

Experiment 6 tested whether more imagined perspectives would affect the use of global representations developed by one-shot across-boundary walking. The representations of objects’ locations encoded at the learning viewpoint in long-term memory should be well developed and enduring because the participants extensively learned the objects from the learning viewpoint. By contrast, the global representations developed by one-shot across-boundary walking might be coarser and transient. It is possible that people would prefer well-developed and enduring spatial representations over coarse and transient ones when the JRD task becomes more complex (e.g., with more and more varied perspectives). In Experiment 6, the participants were tested with four imagined perspectives, more than the two in Experiments 1–3 and the three in Experiments 4–5. If the participants still showed a sensorimotor alignment effect, then this result would suggest that the increased complexity of the imagined perspectives in testing does not affect the use of the global representations.

Method

Participants

Thirty-two university students (16 female) with normal or corrected-to-normal vision participated to partially fulfill the requirement for an introductory psychology course.

Materials, Design, and Procedure

The materials, design, and procedure were the same in Experiment 6 as for the group that included the learning orientation in Experiment 4 except that the imagined perspective of 270° was added to the JRD task (see the trial type of imagined 270 in Table 2 and trial information in Table 3) and thus there were 64 trials for each of the two blocks in the JRD task.

Results

We conducted ANOVAs with one within-subject factor (trial type: aligned, misaligned, imagined 90, imagined 270).

Response Latency

Figure 2 plots the mean response latency for each trial type. The main effect of trial type was significant, F(3, 93) = 8.72, p < .001, ηp2 = .22. Pairwise comparisons showed that the aligned trials were significantly faster than both the misaligned trials and the imagined 270 trials (t(31) = 3.07, p = .004, Cohen’s d = .77; t(31) = 2.69, p = .011, Cohen’s d = .67, respectively), but the aligned trials were not different from the imagined 90 trials (t[31] = 1.17, p = .252, Cohen’s d = .29). The imagined 90 trials were significantly faster than both the misaligned trials and the imagined 270 trials (t(31) = 5.04, p < .001, Cohen’s d = 1.26; t(31) = 3.20, p = .003, Cohen’s d = .80, respectively). The misaligned trials and the imagined 270 trials were not different from each other (t[31] = .74, p = .465, Cohen’s d = .18). These results showed a sensorimotor alignment effect in addition to the learning orientation effect.

Absolute Pointing Error

Figure 3 shows the mean absolute angular pointing error. The main effect of trial type was significant, F(3, 93) = 4.17, p = .008, ηp2 = .12. Pairwise comparisons showed that the participants were significantly more accurate in the aligned trials than in the misaligned trials and the imagined 270 trials (t[31] = 2.12, p = .042, Cohen’s d = .53; t[31] = 2.28, p = .030, Cohen’s d = .57, respectively), but the aligned trials were not different from the imagined 90 trials (t[31] = 1.40, p = .172, Cohen’s d = .35). The responses in the imagined 90 trials were significantly more accurate than those in the misaligned trials and the imagined 270 trials (t[31] = 3.03, p = .005, Cohen’s d = .76; t[31] = 2.18, p = .037, Cohen’s d = .54, respectively). The misaligned trials and the imagined 270 trials were not different from each other (t[31] = .21, p = .835, Cohen’s d = .05). These results showed a sensorimotor alignment effect in addition to the learning orientation effect.

Discussion

The results in Experiment 6 showed a sensorimotor alignment effect, suggesting that the increased variability of the imagined perspectives in testing does not affect the use of the global representations developed by one-shot across-boundary walking.

General Discussion

The current study examined the development of spatial representations of a global environment by one-shot across-boundary walking. The most important finding was that global sensorimotor alignment effects occurred after one-shot across-boundary walking. Furthermore, this global sensorimotor alignment effect was comparable to the effect after one-shot walking within the same room. In addition, this global sensorimotor alignment effect occurred regardless of the instructions to attend to and track the objects in the learning room, the visual cue of the door to another room, the inclusion of the learning orientation in the testing trials, egocentric versus allocentric pointing in the task, and the number of imagined perspectives in the task.

The current study for the first time demonstrates that people can update self-location relative to a global environment including two separate rooms, and can develop global representations, by one-shot across-boundary walking. In addition, updating global headings during novel across-boundary walking seems automatic in the sense that it requires neither explicit instructions to keep track of the original environment nor a visual navigational affordance to another room (i.e., the door). The use of global representations developed by novel across-boundary walking may also be automatic in the sense that variables encouraging the use of the learning-viewpoint representations, which are formed during learning and stored in long-term memory, did not impair the use of global representations to mentally adopt perspectives in the original environment. These results suggest that it may be obligatory to develop global memories and to update self-location using global relations during one-shot across-boundary walking.

The demonstration that people can develop global representations after one-shot across-boundary walking provides insight into the relationship between spatial memory and navigation. To conceptualize how people develop spatial memory in a large-scale environment in which they may not directly see spatial relations between two local spaces, some researchers have proposed that people rely on path integration to develop global spatial memory (Gallistel, 1990; Gallistel & Matzel, 2013; Jacobs & Schenk, 2003; Lei et al., 2020; Loomis et al., 1999; McNaughton et al., 2006; Meilinger, 2008). However, other researchers have argued that global spatial memory may not be developed by path integration because path integration is error-prone and may only cover the immediate space (e.g., Wang, 2016; Warren et al., 2017). The current study provides evidence that people can rely on path integration to develop global spatial memory. Note that the current study only demonstrates that people can rely on path integration to develop global spatial memory of two adjacent rooms after walking a relatively simple path. It is still not clear to what extent people can develop global spatial memory after walking a complex path. It is also not clear whether developing global spatial memory after walking a complex path requires extensive navigation experiences and reciprocal interaction between navigation and spatial memory. Future studies are required to understand the role of path complexity and navigation experiences in developing global spatial memory through navigation in a more complex environment.

Previous studies have shown difficulty in developing global representations of multiscale spaces, even after extensive navigation experiences. People may only develop local representations for individual spaces without encoding global relations, and they may shift between local representations when navigating across spaces without relying on global relations (Marchette et al., 2014; Wang & Brockmole, 2003). Developing global representations requires some prerequisites, for example, some prior learning of the global environment or explicit instructions to encode global relations (Han & Becker, 2014; Lei et al., 2020; Shine et al., 2016). We speculate that the inconsistency between the current and previous findings may be reconciled by the complexity of large-scale environments and also by the availability of idiothetic cues during navigation.

First, the number of individual spaces may influence the complexity of large-scale environments. In the current study, the environment only had two rooms with a simple walking path between the rooms. Some previous studies may have used more complex large-scale environments with more individual spaces and more paths between the spaces, for example, a university campus (Wang & Brockmole, 2003) or a large park with four museums (Marchette et al., 2014). The increased number of individual spaces and the increased complexity of the paths linking individual spaces may impair updating self-location relative to global relations and developing global memories, due to the limited capacity in working memory to track spatial relations to multiple spaces (Cowan, 2010) and also the errors accumulated in path integration (Etienne & Jeffery, 2004).

Second, local spaces that are visually similar but globally misaligned may also interfere with developing global representations between local spaces. People can form schematic representations for geometrically equivalent local spaces (Lei et al., 2020; Marchette et al., 2014, 2017). When local reference directions of two spaces (e.g., the major axis of a rectangular room) are globally misaligned, people may be more likely to rely on local representations (e.g., visual-based reanchoring, according to Riecke & McNamara, 2017) rather than global representations to update self-location. In the current study, because the learning and testing rooms were both square rooms, there were no conflicting local reference directions in different rooms. The participants could only rely on global representations for self-localization. Future studies are needed to test whether people can still update self-location relative to the global environment by one-shot walking across spaces when the two spaces are locally similar but globally misaligned.

Third, the participants in the current study physically walked across boundaries, which means they had idiothetic information for both translation and rotation in navigation. However, the participants in some previous studies only navigated with visual cues, such as by using a keyboard to navigate in a desktop virtual environment (e.g., Marchette et al., 2014), or with rotational idiothetic cues, such as by physically rotating but using a joystick to visually translate in a virtual environment (e.g., Lei et al., 2020). Previous studies on the contributions of locomotion modes have shown that idiothetic information during navigation is important to path integration and spatial knowledge acquisition (Chance et al., 1998; Chrastil & Warren, 2013; Klatzky et al., 1998; Rieser, 1989; Waller et al., 2004). For a large-scale environment, translational idiothetic information may be more important than rotational idiothetic information to encode accurate directions and distances in cognitive maps (Ruddle et al., 2011). Thus, the availability of idiothetic information for translation and rotation during navigation may affect the function of path integration to update and develop global memories by one-shot across-boundary navigation.

The experiments in the current study consistently showed sensorimotor alignment effects after the participants physically walked from the learning room to the neighboring testing room. In contrast, Kelly et al. (2007) showed mixed results. Although they also had the participants physically walk from the learning room to a novel testing room, the results did not show sensorimotor alignment effects unless the testing room looked similar to the learning room. We speculate that participants’ choices of representations might have caused the mixed results. Participants could use the learning-viewpoint representations, which were encoded in the learning room and stored in long-term memory (Shelton & Marchette, 2010), or the updated self-localization representations in the global environment. Whether people use the global or the learning-viewpoint representations depends on how strong the global representations are. The mixed results might have occurred because global representations were stronger in the visually similar testing room than in the visually different testing room (Han & Becker, 2014). The current study used a visually different testing room. However, our participants might still have used the updated global representations because the current study increased the strength of global representations by making navigation in the virtual environments more realistic (e.g., asking the participants to move to touch the real wall). In addition, the current study doubled the sample size used in Kelly et al. (2007) (i.e., increasing from 16 to 32), which increased the power to detect the medium-sized global sensorimotor alignment effect observed in the current study (Cohen’s d was about .60; see Figure 2).

Although the global sensorimotor alignment effects in the current study are sufficient to conclude the existence of global representations, a lack of such effects is not conclusive evidence for a lack of global representations. People may develop global representations between two rooms but may not show the global sensorimotor alignment effect in some situations, for example, when the two rooms are distant. Instead of simply using the global sensorimotor alignment effect to examine the existence of global representations, it is more meaningful to systematically examine the factors that can modulate the global sensorimotor alignment effect, such as attention to global spatial relations (Lei & Mou, 2021; Sholl et al., 2006).

In conclusion, the current study showed global sensorimotor alignment effects after the participants physically walked once from the learning room to the testing room in a novel environment. These results indicate that people can update self-location relative to an adjacent room and develop global memories of a multiroom environment by one-shot across-boundary walking. Boundaries may not impair updating and developing global memories by one-shot walking. In addition, encoding and using global representations are robust to various encoding and retrieval manipulations.

Footnotes

1  ηp² of 0.11 in an F(1, 62) test is comparable to a Cohen's d of 0.5, a medium effect. N is the participant number in each boundary condition.

2  The null effect is favoured if the BF01 is larger than 3 and strongly favoured if the BF01 is larger than 10. The alternative effect is favoured if the BF01 is smaller than 1/3 and strongly favoured if the BF01 is smaller than 1/10 (Rouder et al., 2009). Neither is favoured if the BF01 is between 1/3 and 3.
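The decision rule in this footnote can be written out directly. The sketch below simply encodes the BF01 thresholds stated above (Rouder et al., 2009); the category labels are ours.

```python
def interpret_bf01(bf01):
    """Classify a Bayes factor BF01 using the conventional
    thresholds of 3 and 10 (Rouder et al., 2009)."""
    if bf01 > 10:
        return "null strongly favoured"
    if bf01 > 3:
        return "null favoured"
    if bf01 < 1 / 10:
        return "alternative strongly favoured"
    if bf01 < 1 / 3:
        return "alternative favoured"
    return "neither favoured"
```

For example, a BF01 of 5 favours the null, a BF01 of 0.2 favours the alternative, and a BF01 of 1 is uninformative.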

3  SE removing the variance from individual differences was obtained with the following equations: SE = √(MSE/N), where MSE was the within-subject MSE in the ANOVA conducted in each condition and N was the subject number in each condition; or SE = |Mean difference|/(t × √2), where Mean difference was the absolute mean difference between the aligned and misaligned trials and t was the t value in the paired sample t test between the aligned and misaligned trials.
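The two SE computations described in this footnote can be sketched as follows. This assumes the standard Loftus–Masson-style within-subject SE, √(MSE/N), and a conversion from a paired t statistic in which dividing by √2 turns the SE of the difference into the SE of a single condition mean; the √2 term is our reading of the partly garbled footnote, not a formula confirmed by the source.

```python
import math


def within_subject_se(mse, n):
    """SE from the within-subject mean square error (MSE) of a
    repeated-measures ANOVA with n subjects per condition."""
    return math.sqrt(mse / n)


def se_from_paired_t(mean_diff, t):
    """Condition-level SE recovered from a paired t test.

    |mean_diff| / t is the SE of the difference score; dividing by
    sqrt(2) converts it to the SE of one condition mean (assuming
    equal variance across conditions).
    """
    return abs(mean_diff) / (t * math.sqrt(2))
```

Both functions strip between-subject variance, so the resulting error bars reflect the within-subject (aligned vs. misaligned) comparison rather than overall individual differences.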

References

Bonner, M. F., & Epstein, R. A. (2017). Coding of navigational affordances in the human visual system. Proceedings of the National Academy of Sciences of the United States of America, 114(18), 4793–4798. 10.1073/pnas.1618228114

Chance, S. S., Gaunet, F., Beall, A. C., & Loomis, J. M. (1998). Locomotion mode affects the updating of objects encountered during travel: The contribution of vestibular and proprioceptive inputs to path integration. Presence, 7(2), 168–178. 10.1162/105474698565659

Chrastil, E. R., & Warren, W. H. (2013). Active and passive spatial learning in human navigation: Acquisition of survey knowledge. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39(5), 1520–1537. 10.1037/a0032382

Cowan, N. (2010). The magical mystery four: How is working memory capacity limited, and why? Current Directions in Psychological Science, 19(1), 51–57. 10.1177/0963721409359277

Etienne, A. S., & Jeffery, K. J. (2004). Path integration in mammals. Hippocampus, 14(2), 180–192. 10.1002/hipo.10173

Etienne, A. S., Maurer, R., Berlie, J., Reverdin, B., Rowe, T., Georgakopoulos, J., & Séguinot, V. (1998). Navigation through vector addition. Nature, 396(6707), 161–164. 10.1038/24151

Etienne, A. S., Maurer, R., Boulens, V., Levy, A., & Rowe, T. (2004). Resetting the path integrator: A basic condition for route-based navigation. The Journal of Experimental Biology, 207(Pt. 9), 1491–1508. 10.1242/jeb.00906

Farrell, M. J., & Robertson, I. H. (1998). Mental rotation and automatic updating of body-centered spatial relationships. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24(1), 227–233. 10.1037/0278-7393.24.1.227

Foo, P., Warren, W. H., Duchon, A., & Tarr, M. J. (2005). Do humans integrate routes into a cognitive map? Map- versus landmark-based navigation of novel shortcuts. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(2), 195–215. 10.1037/0278-7393.31.2.195

Gallistel, C. R. (1990). The organization of learning. The MIT Press.

Gallistel, C. R., & Matzel, L. D. (2013). The neuroscience of learning: Beyond the Hebbian synapse. Annual Review of Psychology, 64, 169–200. 10.1146/annurev-psych-113011-143807

Greene, M. R., & Oliva, A. (2009). Recognition of natural scenes from global properties: Seeing the forest without representing the trees. Cognitive Psychology, 58(2), 137–176. 10.1016/j.cogpsych.2008.06.001

Han, X., & Becker, S. (2014). One spatial map or many? Spatial coding of connected environments. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(2), 511–531. 10.1037/a0035259

Jacobs, L. F., & Schenk, F. (2003). Unpacking the cognitive map: The parallel map theory of hippocampal function. Psychological Review, 110(2), 285–315. 10.1037/0033-295X.110.2.285

Jayakumar, R. P., Madhav, M. S., Savelli, F., Blair, H. T., Cowan, N. J., & Knierim, J. J. (2019). Recalibration of path integration in hippocampal place cells. Nature, 566(7745), 533–537. 10.1038/s41586-019-0939-3

Kelly, J. W., Avraamides, M. N., & Loomis, J. M. (2007). Sensorimotor alignment effects in the learning environment and in novel environments. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(6), 1092–1107. 10.1037/0278-7393.33.6.1092

Klatzky, R. L., Loomis, J. M., Beall, A. C., Chance, S. S., & Golledge, R. G. (1998). Spatial updating of self-position and orientation during real, imagined, and virtual locomotion. Psychological Science, 9(4), 293–298. 10.1111/1467-9280.00058

Lei, X., & Mou, W. (2021). Updating self-location by self-motion and visual cues in familiar multiscale spaces. Journal of Experimental Psychology: Learning, Memory, and Cognition. Advance online publication. 10.1037/xlm0000992

Lei, X., Mou, W., & Zhang, L. (2020). Developing global spatial representations through across-boundary navigation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(1), 1–23. 10.1037/xlm0000716

Loomis, J. M., Klatzky, R. L., Golledge, R. G., & Philbeck, J. W. (1999). Human navigation by path integration. In R. G.Golledge (Ed.), Wayfinding: Cognitive mapping and other spatial processes (pp. 125–151). Johns Hopkins University Press.

Marchette, S. A., Ryan, J., & Epstein, R. A. (2017). Schematic representations of local environmental space guide goal-directed navigation. Cognition, 158, 68–80. 10.1016/j.cognition.2016.10.005

Marchette, S. A., Vass, L. K., Ryan, J., & Epstein, R. A. (2014). Anchoring the neural compass: Coding of local spatial reference frames in human medial parietal lobe. Nature Neuroscience, 17(11), 1598–1606. 10.1038/nn.3834

McNaughton, B. L., Battaglia, F. P., Jensen, O., Moser, E. I., & Moser, M. B. (2006). Path integration and the neural basis of the ‘cognitive map’. Nature Reviews Neuroscience, 7(8), 663–678. 10.1038/nrn1932

Meilinger, T. (2008). The network of reference frames theory: A synthesis of graphs and cognitive maps. In C.Freksa, N. S.Newcombe, P.Gärdenfors, & S.Wölfl (Eds.), Spatial Cognition VI. Learning, Reasoning, and Talking about Space (pp. 344–360). Springer. 10.1007/978-3-540-87601-4_25

Mittelstaedt, M. L., & Mittelstaedt, H. (1980). Homing by path integration in a mammal. Naturwissenschaften, 67(11), 566–567. 10.1007/BF00450672

Mohler, B. J., Creem-Regehr, S. H., & Thompson, W. B. (2006). The influence of feedback on egocentric distance judgments in real and virtual environments. Proceedings of the Third SIGGRAPH Symposium on Applied Perception in Graphics and Visualization (pp. 9–14). ACM Press. 10.1145/1140491.1140493

Mou, W., & Wang, L. (2015). Piloting and path integration within and across boundaries. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41(1), 220–234. 10.1037/xlm0000032

Mou, W., Li, X., & McNamara, T. P. (2008). Body- and environmental-stabilized processing of spatial knowledge. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34(2), 415–421. 10.1037/0278-7393.34.2.415

Mou, W., McNamara, T. P., Valiquette, C. M., & Rump, B. (2004). Allocentric and egocentric updating of spatial memories. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(1), 142–157. 10.1037/0278-7393.30.1.142

Radvansky, G. A., & Copeland, D. E. (2006). Walking through doorways causes forgetting: Situation models and experienced space. Memory & Cognition, 34(5), 1150–1156. 10.3758/BF03193261

Radvansky, G. A., Tamplin, A. K., & Krawietz, S. A. (2010). Walking through doorways causes forgetting: Environmental integration. Psychonomic Bulletin & Review, 17(6), 900–904. 10.3758/PBR.17.6.900

Riecke, B. E., & McNamara, T. P. (2017). Where you are affects what you can easily imagine: Environmental geometry elicits sensorimotor interference in remote perspective taking. Cognition, 169, 1–14. 10.1016/j.cognition.2017.07.014

Rieser, J. J. (1989). Access to knowledge of spatial structure at novel points of observation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15(6), 1157–1165. 10.1037/0278-7393.15.6.1157

Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16(2), 225–237. 10.3758/PBR.16.2.225

Ruddle, R. A., Volkova, E., & Bülthoff, H. H. (2011). Walking improves your cognitive map in environments that are large-scale and large in extent. ACM Transactions on Computer-Human Interaction, 18(2), 1–20. 10.1145/1970378.1970384

Savelli, F., & Knierim, J. J. (2019). Origin and role of path integration in the cognitive representations of the hippocampus: Computational insights into open questions. The Journal of Experimental Biology, 222(Pt. Suppl. 1), jeb188912. 10.1242/jeb.188912

Shelton, A. L., & Marchette, S. A. (2010). Where do you think you are? Effects of conceptual current position on spatial memory performance. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(3), 686–698. 10.1037/a0018713

Shine, J. P., Valdés-Herrera, J. P., Hegarty, M., & Wolbers, T. (2016). The human retrosplenial cortex and thalamus code head direction in a global reference frame. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 36(24), 6371–6381. 10.1523/JNEUROSCI.1268-15.2016

Sholl, M. J., Kenny, R. J., & DellaPorta, K. A. (2006). Allocentric-heading recall and its relation to self-reported sense-of-direction. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(3), 516–533. 10.1037/0278-7393.32.3.516

Siegel, Z. D., Kelly, J. W., & Cherep, L. A. (2017). Rescaling of perceived space transfers across virtual environments. Journal of Experimental Psychology: Human Perception and Performance, 43(10), 1805–1814. 10.1037/xhp0000401

Strickrodt, M., Bülthoff, H. H., & Meilinger, T. (2019). Memory for navigable space is flexible and not restricted to exclusive local or global memory units. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(6), 993–1013. 10.1037/xlm0000624

Taube, J. S., Valerio, S., & Yoder, R. M. (2013). Is navigation in virtual reality with FMRI really navigation? Journal of Cognitive Neuroscience, 25(7), 1008–1019. 10.1162/jocn_a_00386

Waller, D., Loomis, J. M., & Haun, D. B. (2004). Body-based senses enhance knowledge of directions in large-scale environments. Psychonomic Bulletin & Review, 11(1), 157–163. 10.3758/BF03206476

Wang, R. F. (2004). Between reality and imagination: When is spatial updating automatic? Perception & Psychophysics, 66(1), 68–76. 10.3758/BF03194862

Wang, R. F. (2016). Building a cognitive map by assembling multiple path integration systems. Psychonomic Bulletin & Review, 23(3), 692–702. 10.3758/s13423-015-0952-y

Wang, R. F., & Brockmole, J. R. (2003). Human navigation in nested environments. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(3), 398–404. 10.1037/0278-7393.29.3.398

Warren, W. H., Rothman, D. B., Schnapp, B. H., & Ericson, J. D. (2017). Wormholes in virtual space: From cognitive maps to cognitive graphs. Cognition, 166, 152–163. 10.1016/j.cognition.2017.05.020

Wehner, R., Michel, B., & Antonsen, P. (1996). Visual navigation in insects: Coupling of egocentric and geocentric information. The Journal of Experimental Biology, 199(Pt. 1), 129–140. 10.1242/jeb.199.1.129

Zhang, L., & Mou, W. (2017). Piloting systems reset path integration systems during position estimation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(3), 472–491. 10.1037/xlm0000324

Submitted: December 11, 2020 Revised: May 26, 2021 Accepted: July 22, 2021

ISSN: 0278-7393 (print); 1939-1285 (electronic)
DOI: 10.1037/xlm0001083
