The match in orientation between verbal context and object accelerates change detection

Research in the field of embodied cognition has shown that sensorimotor simulation significantly influences various aspects of cognitive processing. This experiment was designed to test the impact of the sensorimotor simulation of objects' physical properties, initiated by the preceding verbal context, on change detection performance. Before each change detection trial, participants were exposed to sentences suggesting a particular object orientation (horizontal or vertical). The orientation in the first display of the objects that were to be replaced in the second was also manipulated. Response latency results show that sentences implying the same spatial orientation as that of the to-be-changed object led to faster detection of its change than sentences implying the mismatching orientation, an effect we explain in terms of the superior encoding, facilitated by sensorimotor simulation, of objects with matching orientation.

PSIHOLOGIJA, 2019, Vol. 52(1), 93–105

Change blindness is the failure to detect an obvious change to a scene when it occurs during a visual disruption (Simons & Rensink, 2005), regardless of the magnitude of the change or of whether participants are informed in advance that a change might occur (Rensink, 2008). It can arise from failures at any of the stages of processing involved in change detection: encoding, retention, retrieval, comparison of the pre- and post-change items, and report (Henderson & Hollingworth, 1999). The engagement of focused attention has emerged as an important condition of successful change detection (Rensink, O'Regan, & Clark, 1997; Scholl, 2000). This claim has also received neuropsychological support. Yet, attention is not a sufficient condition for change detection, as changes to areas on which attention is focused are still frequently missed (Simons & Rensink, 2005). Other, more specific factors that can facilitate successful change detection have been revealed by various studies, such as participants' expertise in the domain to which the images pertain (Werner & Thies, 2000), task complexity, stimulus on-time and the time for which a scene is obscured (Hewlett & Oezbek, 2012), perceptual load (Lavie, 2006), or the semantic consistency between the scene and the object (Hollingworth & Henderson, 2000).
Studies building on the dichotomy between the cognitive and the sensorimotor visual systems, and on the influence of the latter on change detection (Bridgeman, Lewis, Heit, & Nagle, 1979), revealed that hand position is another factor influencing change detection. Specifically, when participants' hands are near the display, change detection is enhanced, due to the additional engagement of the sensorimotor system (Tseng et al., 2010). In turn, this supplementary activation creates a stronger attentional engagement that presumably leads to a better encoding of items in the display and consequently to their superior retention in visual short-term memory (VSTM).
In the embodied cognition perspective, this facilitatory effect of nearby hands on change detection can be seen as another illustration of the interplay between cognitive processes and action. Previous studies showed that sensorimotor experiences can support various aspects of cognitive functioning (Barsalou, 1999). Most of these investigations focused on higher cognitive processes, revealing, for instance, that gestures can activate certain notions (Chandler & Schwarz, 2009), that movements can affect sentence processing (Glenberg & Kaschak, 2002), or that the speed with which people mentally verify a sensory quality of an object depends on its match with the sensory modality of a perceptual signal detected before this conceptual task (van Dantzig, Pecher, Zeelenberg, & Barsalou, 2008). Conversely, conceptual processing has been shown to influence sensorimotor functioning (Chen & Bargh, 1999; Stanfield & Zwaan, 2001), further supporting a perceptual symbols account (Barsalou, 1999) of the interplay between cognition and action. In this view, perceptual and action patterns are captured from direct experiences with the relevant entities. During offline processing, the mental representation of these entities also includes the re-enactment of the perceptual symbols that condense one's past sensorimotor experiences. A set of empirical results supporting this thesis reveals the role of sensorimotor simulation in performing various cognitive tasks. For instance, studies on the modality-switch effect (Pecher, Zeelenberg, & Barsalou, 2003; Scerrati et al., 2015), in tasks requiring participants to mentally verify whether certain objects possess particular physical properties, revealed a processing cost in terms of speed and accuracy when the sensory modalities of the verified properties alternate, thus requiring participants to change the modality mentally represented and used in the verification task.
Such results suggest that the sensorimotor simulations generated during sentence processing include specific perceptual characteristics of the object, as constrained by the linguistic context of the sentence. So far, studies on the perceptual effects of sentence-induced simulations have made use of simple, low-perceptual-load tasks, such as property verification or sentence-picture verification. Yet, the positive influence of a match between the simulated perceptual features and those of the perceived object on the recognition of the latter raises the possibility that perceptual simulation could also function as a facilitating factor in a complex visual task such as change detection.
The aim of our study was to test this idea by adapting the sentence-picture verification paradigm to the change blindness area. We manipulated a specific perceptual feature of the sensorimotor simulation induced by the preceding sentence, namely orientation, and then used a flicker paradigm in which the changed object initially had the same or a different orientation. This allowed us to examine the effect of the match between the orientation implied by the sentence and that of the target object as presented in the initial display on the speed and accuracy of detecting its replacement in the second. We chose orientation as the target physical property instead of color or size because the latter two are also highly salient characteristics of objects that direct the eye gaze (Henderson, 2003; Henderson, 2007; Itti & Koch, 2000). Had we used them, the alternative explanation would have been plausible that change detection effects are facilitated not by the sensorimotor simulation of the object but by the greater attentional resources invested in processing it during the first display.

Method

Participants
Thirty-three psychology undergraduate students (29 females), aged 18–21 (M = 19.38, SD = 1.74), participated in this 45-minute experiment. They were given partial course credit for their participation. All had normal or corrected-to-normal visual acuity and were right-handed.
Stimuli. Sixty color images representing commonly used objects were selected from the Bank of Standardized Stimuli (BOSS; Brodeur, Dionne-Dostie, Montreuil, & Lepage, 2010) and used as object stimuli. Thirty of these images were rotated on their vertical axis to produce 30 experimental items with a vertical orientation, 30 images with a horizontal orientation, and 30 images with a 45° orientation; the other 30 images were rotated to produce only a 45° orientation. The 45°-oriented images were used as fillers, similar to other studies on change blindness in which half of the trials contain no change (e.g., Barenholtz et al., 2003). Each image was converted to black and white and resized to a maximum of 140 pixels on both the x and y dimensions. Each subtended a visual angle of 4.58° (horizontal) × 4.58° (vertical) at a viewing distance of approximately 50 cm.
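As a rough check on the stimulus geometry, the relation between on-screen size and visual angle can be sketched with the standard formula; the roughly 4 cm physical size is our own inference from the reported 4.58° at 50 cm, not a value given in the text:

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Visual angle subtended by a stimulus of a given size at a given viewing distance."""
    return 2 * math.degrees(math.atan(size_cm / (2 * distance_cm)))

# Inverting the formula: a 4.58° stimulus at 50 cm is about
# 2 * 50 * tan(2.29°) ≈ 4.0 cm wide on the monitor.
print(round(visual_angle_deg(4.0, 50), 2))  # → 4.58
```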
In each display, six stimuli (two horizontal, two vertical, and two with a 45° orientation) were randomly placed around an imaginary circle with a radius of 5.15° from the centre of the display to the centre of each picture. A white background was used throughout the trials.
We created 120 sentences to precede the pictures: 60 sentences that would induce the simulation of a specific spatial orientation, out of which 30 implied a vertical orientation (e.g., "The fruits are falling from the tree") and 30 implied a horizontal orientation (e.g., "The car is speeding"), similar to the materials used in Stanfield and Zwaan (2001). We also created 60 filler sentences, which did not imply any spatial relationship between components (e.g., "The sky is clear").

Design and Procedure
We used a 2 × 2 repeated measures design, with simulation-inducing sentence (implying either a horizontal or a vertical orientation) and to-be-replaced object orientation (horizontal or vertical) as independent variables. Our dependent variables were participants' response time and accuracy in detecting changes across displays. In order to avoid inducing the expectation that every trial included a change, half of the trials contained no change across displays (i.e., filler trials). As our aim was to investigate the effect of sensorimotor simulation on change detection, we used simulation-inducing sentences only in the trials that included a change across displays; in the filler trials we used sentences from the filler category instead.

Figure 1 shows a schematic illustration of an experimental trial. Each trial began with a 1-second display of a white background; then a sentence appeared for 5 seconds, followed by a blank interstimulus interval (ISI) of 800 ms. A fixation cross was subsequently shown at the center of the display for 1 second, prompting the participant to gaze at the center, followed by a 1-second ISI. The six stimuli placed around an imaginary circle (display A) appeared for 250 ms, followed by a 250-ms ISI, after which the second display of the six stimuli (display A') appeared for 250 ms. In half of the trials the two displays were identical (no change), and in the other half the second display differed only in that one item was replaced by another during the ISI. The replacement could occur at any random location. The change detection sequence cycled through the two displays, separated by a blank interval, until a response was made or for a maximum of 10 repetitions. Ethics approval for the present study was given by the University Research Ethics Committee and informed consent was obtained from all participants.
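The trial timeline above can be summarized as a simple event schedule (durations in ms); this is an illustrative sketch only, and the blank interval between successive flicker cycles is left out because the text does not specify it:

```python
# Pre-stimulus sequence of one trial, as described in the procedure (durations in ms).
PRE_SEQUENCE = [
    ("white background", 1000),
    ("sentence",         5000),
    ("blank ISI",         800),
    ("fixation cross",   1000),
    ("blank ISI",        1000),
]
# One flicker cycle: display A, blank, display A'; repeated up to 10 times
# or until the participant responds.
FLICKER_CYCLE = [
    ("display A",  250),
    ("blank ISI",  250),
    ("display A'", 250),
]
MAX_REPETITIONS = 10

pre_ms = sum(d for _, d in PRE_SEQUENCE)
cycle_ms = sum(d for _, d in FLICKER_CYCLE)
print(pre_ms, cycle_ms)  # → 8800 750
```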
At the beginning of the study we informed each observer that he or she was going to participate in an experiment related to visual attention and that the task was to read each sentence carefully and then to respond as soon as possible if a change occurred. In each trial, the observer's task was to press the "L" key upon detecting a change and the "A" key upon deciding that no change had occurred. As soon as a key was pressed, the trial cycle stopped and a new trial began. Participants were told that reaction times and accuracy were being measured and that it was important for them to make decisions about the pictures as quickly and accurately as possible. They were also told that at the end of the experiment they would be asked to recall some of the sentences they had just read, so they should be sure to read each sentence carefully.
Each sentence was randomly selected from a list of 120 sentences and could appear only once. The type of change was determined by the sentence category (vertical, horizontal, or filler). When the displayed sentence was a filler, there was no change across displays. When the sentence implied a vertical orientation, the change could occur either at a vertical item (creating a match in orientation between the simulation sentence and the to-be-changed item) in up to 15 trials, or at a horizontal item (creating a mismatch between simulation and item) in up to 15 trials. The same procedure of creating 15 matching and 15 mismatching trials was used for the sentences implying a horizontal orientation.
There were 15 trials for each combination of the two types of replaced objects (defined by their orientation) and the two types of preceding sentences, plus 60 filler trials, resulting in a total of 15 trials × 2 simulation sentences × 2 item orientations + 60 filler trials = 120 trials. Trials were completely randomized for each participant. We also used 5 practice trials at the beginning of the experiment to familiarize participants with the procedure. Stimuli were presented on a 19-inch monitor with a 60 Hz refresh rate, and observers were seated approximately 50 cm from the monitor. The experimental software was Inquisit 4 (2014).
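The trial counts can be verified with a minimal enumeration sketch; the dictionary keys and labels are our own illustration, not part of the experimental software:

```python
from itertools import product

# Change trials: 15 per cell of the 2 (sentence orientation) x 2 (object orientation) design.
change_trials = [
    {"sentence": s, "object": o, "change": True}
    for s, o in product(("horizontal", "vertical"), repeat=2)
    for _ in range(15)
]
# Filler trials: neutral sentence, no change across displays.
filler_trials = [{"sentence": "filler", "object": None, "change": False}] * 60

trials = change_trials + filler_trials  # would be shuffled per participant
matching = sum(t["sentence"] == t["object"] for t in change_trials)
print(len(trials), matching)  # → 120 30
```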

Results
Before the analyses were carried out, data from 3 outlier participants, whose mean response time (RT) lay more than 3 standard deviations above or below the group mean or whose change detection accuracy was below 70%, were eliminated. Trials on which participants misclassified the trial as a change or no-change trial were not included in the RT analyses.
Detection time was measured in milliseconds from the onset of the second display to the key press response. Mean accuracy and RT per condition were computed for each participant.
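The participant-level exclusion rule (mean RT beyond 3 SDs of the group mean, or accuracy below 70%) can be sketched as follows; the data structure and function name are illustrative, not the actual analysis code:

```python
from statistics import mean, stdev

def exclude_outliers(participants, acc_cutoff=0.70, z_cutoff=3.0):
    """Drop participants whose mean RT lies beyond z_cutoff SDs of the
    group mean RT, or whose change-detection accuracy is below acc_cutoff."""
    rts = [p["mean_rt"] for p in participants]
    m, sd = mean(rts), stdev(rts)
    return [
        p for p in participants
        if abs(p["mean_rt"] - m) <= z_cutoff * sd and p["accuracy"] >= acc_cutoff
    ]

# Illustrative data: the third participant falls below the 70% accuracy cutoff.
sample = [
    {"mean_rt": 2000, "accuracy": 0.90},
    {"mean_rt": 2100, "accuracy": 0.90},
    {"mean_rt": 1900, "accuracy": 0.60},
    {"mean_rt": 2050, "accuracy": 0.95},
]
kept = exclude_outliers(sample)
print(len(kept))  # → 3
```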
Correct response times from change trials were entered into a repeated measures analysis of variance (ANOVA). There was no main effect of simulation sentence (F(1, 29) = 0.02, p = .89, ηp² = .001), nor a main effect of object orientation (F(1, 29) = 0.013, p = .91, ηp² < .001). There was, however, a significant interaction between simulation sentence and object orientation, F(1, 29) = 14.24, p = .001, ηp² = .33 (see Figure 2). Participants detected the replacement of horizontal objects in the first display significantly more quickly (t(29) = -2.27, p = .03) when preceded by a horizontal simulation sentence (M = 1882.57, SD = 1006.34) than when preceded by a vertical simulation sentence (M = 2117.82, SD = 1044.58); conversely, participants detected the replacement of vertical objects significantly faster (t(29) = -2.02, p = .05) when preceded by a vertical simulation sentence (M = 1886.65, SD = 876.69) than when preceded by a horizontal simulation sentence (M = 2097.92, SD = 1096.32).

Accuracy in the change detection task was entered into a repeated measures ANOVA. There was no main effect of simulation sentence (F(1, 29) = 0.64, p = .43, ηp² = .02), nor a main effect of object orientation (F(1, 29) = 1.34, p = .264, ηp² = .04). There was no significant interaction between simulation sentence and object orientation (F(1, 29) = 0.53, p = .47, ηp² = .02). We also examined the difference in reaction times between the matching and the mismatching conditions, similar to the data analysis approach frequently used in studies employing the sentence-picture verification paradigm (e.g., Stanfield & Zwaan, 2001; Zwaan & Pecher, 2012).
To this end, we combined correct reaction times from the horizontal sentence/horizontal object and vertical sentence/vertical object conditions to create the general "matching" condition, while the horizontal sentence/vertical object and vertical sentence/horizontal object conditions were combined to create the "mismatching" condition. Correct reaction times emerged as significantly lower (t(29)
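Collapsing the four cells into matching and mismatching conditions can be illustrated with the cell means reported above (a simple unweighted average of cell means; the actual analysis was performed on per-participant reaction times):

```python
# Cell means (ms) reported in the results above, keyed by (sentence, object) orientation.
cell_means = {
    ("horizontal", "horizontal"): 1882.57,
    ("vertical",   "vertical"):   1886.65,
    ("horizontal", "vertical"):   2097.92,
    ("vertical",   "horizontal"): 2117.82,
}

matching    = [rt for (s, o), rt in cell_means.items() if s == o]
mismatching = [rt for (s, o), rt in cell_means.items() if s != o]

match_mean    = sum(matching) / len(matching)        # 1884.61 ms
mismatch_mean = sum(mismatching) / len(mismatching)  # 2107.87 ms
print(round(mismatch_mean - match_mean, 2))  # → 223.26
```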

Control Study
In order to further test the facilitating effect of the orientation implied by the preceding sentence on change detection speed, we ran a control study in which participants were administered a flicker task with the same pictorial stimuli, but preceded by sentences that did not imply any spatial orientation. This allowed us to compare the reaction times in the main experiment, which we hypothesize to be influenced by the sensoriomotor simulation of the object as induced by the preceding verbal context, with those in this control situation.
Participants, stimuli, and design. Four participants (two females), aged 22–30 (M = 25.50, SD = 3.69), were included in this control study. We used the same image displays as in the main experiment, preceded in each trial by a sentence randomly selected from a set of 600 sentences that did not imply any spatial relationship between components. Sixty of these sentences were those used as filler sentences in the main experiment, and 540 were newly created (e.g., "The law is changed"). There were 600 trials, of which 300 contained no change in the displays. In half of the change trials an object placed horizontally in the first display was replaced, and the other half involved the replacement of vertical items.

Results.
Participants' mean correct response time in change trials involving the replacement of horizontal objects was 2281.23 ms (SD = 255.73). We used a one-sample t test to compare this mean with the latencies of participants in the main experiment in the two conditions involving the replacement of horizontal objects. The mean correct response time in detecting the replacement of horizontal objects when preceded by a horizontal simulation sentence (M = 1882.57, SD = 1006.34) emerged as significantly shorter (t(29) = 2.17, p = .04) than the corresponding mean reaction time in the control study, but there was no significant difference between the latter and the mean correct response time in the vertical sentence/horizontal object condition (t(29) = 0.86, p = .4).
In control change trials in which vertical objects were replaced, participants' mean correct response time was 2240.20 ms (SD = 275.96). This was significantly longer than the mean correct response time in detecting the replacement of vertical objects when preceded by a vertical simulation sentence (M = 1886.65, SD = 876.69) in the main experiment (t(29) = 2.21, p = .03), but not significantly different from the mean correct response time in the horizontal sentence/vertical object condition (t(29) = 0.71, p = .5). Thus, in both matching conditions in the main experiment (horizontal sentence/horizontal object and vertical sentence/vertical object), mean correct response time was significantly shorter than the mean response time in detecting the replacement of objects with the corresponding orientation (horizontal or vertical, respectively) in the control study. We also found that correct response times of the experimental group in the matching condition (combining the horizontal sentence/horizontal object and vertical sentence/vertical object conditions) were marginally shorter than those of the control group in a between-groups comparison (t(32) = 1.79, p = .09). This again indicated a general increase in change detection speed produced by sentences matching the orientation of the replaced object. In the mismatching condition, experimental participants' mean correct response time was not significantly different from that of the control group (t(32) = 0.67, p = .51).

Discussion and Conclusion
We investigated the effect of a sensorimotor simulation of objects' orientation, implied by the preceding verbal context, on change detection, using a modified sentence-picture verification paradigm designed to assess change detection. Our results show that the sentence-induced perceptual simulation of objects that are replaced across displays facilitates the speed with which the change is detected. Sentences implying the same spatial orientation as that of the to-be-changed object as perceived in the first display led to faster detection of its change compared to sentences implying the other, mismatching orientation, and also compared to sentences implying no spatial orientation, as indicated by the results of the control study. These differences can be conceived of as being determined by the effect of simulation on the retention in VSTM of objects presented in the same orientation as that implied by the preceding sentence. In turn, this effect could be mediated by the superior encoding of these objects in comparison with objects of mismatching orientation. More specifically, the encoding advantage induced by simulation consists first in the faster identification of objects with a sentence-display match in orientation, as demonstrated by a series of studies in the sentence-picture verification paradigm (Stanfield & Zwaan, 2001; Zwaan & Pecher, 2012).
The acceleration of identification can be conceived of as a perceptual priming effect, grounded in the perceptual nature of simulation (Barsalou, 1999): the simulated orientation, as induced by the preceding sentence, functions as a prime for the actual orientation of the object, thus accelerating its processing and subsequent identification when the two orientations match. As Hirschfeld and Zwitserlood (2011) show in their investigation using the sentence-picture verification task, top-down processing, from verbally implied physical characteristics to the perceptual processing of physical features, includes a feedback mechanism driven by low spatial frequencies.
More specifically, the effect of the match between the verbal and physical layers disappeared when low spatial frequencies were removed from pictures of target objects, indicating a perceptual mechanism behind the match effect: the linguistic context primes subsequent object recognition by generating initial guesses about the visual properties of the object, and these guesses are then confronted with the information extracted from a fast initial analysis of the low spatial frequencies of the perceived object. Such results support recent perspectives on the influence of the perceptual-symbol system on perception that move beyond describing simulation as merely a passive parallel reenactment of the object and hypothesize instead that sensorimotor simulations also generate predictions that guide visual processing (Barsalou, 2009).
Another encoding advantage of objects whose orientation matches that implied by the sentence in the first display, which might explain the shorter response latencies in detecting their change, is their superior memory retention. First, their faster identification leaves more time to consolidate their identity in memory during the first display. Second, previous studies highlight the participation of verbally induced sensorimotor simulations in concept representation (Pecher, Zanolie, & Zeelenberg, 2007). In this context, it can be hypothesized that the superior retention, at least in VSTM, brought about by simulating a specific physical feature, such as a particular orientation, extends from the verbal context that induced the simulation to objects with the same feature.
The superior encoding of objects perceived in the first display that match the orientation implied by the preceding sentence represents a top-down mechanism similar to others shown to significantly influence the speed with which people detect changes. Studies on the detection of changes in global scenes showed that this speed depends on the semantic relationship of the changed object to the global meaning of the scene, which is quickly decoded in the first display. Specifically, changes to items that are important for experts' interpretation of a scene (Werner & Thies, 2000), to elements concerning its main theme (O'Regan, Deubel, Clark, & Rensink, 2000), or to scene-inconsistent items (Hollingworth & Henderson, 2000) are detected faster, due to the top-down enhancement of their encoding in the first display and the salience of the change in the second. Moreover, a mechanism posited to mediate the higher sensitivity to semantically relevant changes resides in the detailed expectations about scene-relevant elements that are generated in this top-down processing (Werner & Thies, 2000). This parallels the model of the effects of verbally induced simulation on change detection that we put forth above, which also includes predictions that guide visual processing. Both top-down mechanisms start at the cognitive level, represented in our explanatory model by the semantic equivalent of the physical property that is subsequently simulated. The difference between them is that while the influence of scene meaning on perceptual processing is direct, the model of change detection in the sentence-picture paradigm includes sensorimotor simulation as a mediator.
A limitation of our study is that our manipulation of sensorimotor simulation did not include a control condition. This is because we applied the sentence-picture verification paradigm in manipulating the content of the simulation, and this paradigm, as used so far, only involves sentences inducing a certain type of simulation, with no control condition (Stanfield & Zwaan, 2001; Zwaan & Pecher, 2012). Accordingly, in our analyses the effect of the simulation is highlighted through comparisons between the types of sentences defined by this variable.
To conclude, sensorimotor simulations of spatial orientation accelerate the detection of changes applied to objects presented in the same orientation, due to the faster identification of these objects, guided by visual predictions, and to their superior retention. This top-down mechanism confers an encoding advantage on objects with a sentence-picture orientation match and leads to a positive effect of simulation on the maintenance of the object in VSTM between displays. Further studies should test this change detection acceleration effect for other visual properties, such as shape or color, whose simulation has been shown by embodied cognition research to significantly influence cognitive processes.