Is Syntactic Working Memory Language-Specific?

One question that has emerged from recent studies on sentence processing pertains to the nature of a specific cognitive mechanism implicated in the maintenance of unintegrated syntactic information during ongoing sentence processing. In addition to evidence from language, recent research on musical syntax has suggested that the processing of musical sequences may require a similar cognitive mechanism. In this paper, evidence is discussed for the involvement of syntactic working memory (SWM) in the processing of linguistic and musical syntax and arithmetic sequences, as well as in complex motor movements used with a specific expressive purpose. The idea is that an anticipatory structure-building component governs interpretation in each of these domains by processing relevant integrations within sequences of structurally dependent elements. The concept of SWM is anchored in representational modularity and the shared syntactic integration resources hypothesis, and it is empirically supported by neurophysiological and neuroimaging evidence.

Most proposals on the role of WM in sentence processing that are based on Baddeley & Hitch's (1974) model and its subsequent versions¹ (Baddeley, 1986, 1995, 2000) have focused on the phonological loop, but without reaching a consensus on how exactly this component contributes to sentence processing (Saffran & Martin, 1990; Vallar, Basso, & Bottini, 1990). Since the phonological loop consists of phonological storage and subvocal rehearsal, it is not clear how it deals with semantic and syntactic aspects of words during sentence processing (Martin & Romani, 1994). Theories of sentence processing emphasize the importance of syntactic and semantic structuring of word sequences in interpretation (Clifton & Frazier, 1990; Gibson, 1998), that is, building a grammatical structure according to the rules of grammar, from which a message or intended meaning of a sentence can be inferred (Kempen, 1999).
Although researchers agree that the processing of complex syntax critically depends on the available WM resources, they do not agree on the nature of such resources. Some of them, inspired by the work of Just and Carpenter (1992), argue for a single-pool concept of WM resources, while others gravitate toward a multi-component concept of WM, in which different components control different aspects of sentence processing (Martin & Romani, 1994; Martin, 1995; Caplan & Waters, 1999; Crosson et al., 1999, among others). Among the latter, several attempts to elaborate Baddeley & Hitch's (1974) model were made in the 1990s, proposing that verbal WM fractionates according to the types of linguistic information, such as syntactic (Martin & Romani, 1994; Lewis, 1999), semantic (Martin & Romani, 1994), and phonological and orthographic WM (Crosson et al., 1999), or according to the types of verbal tasks, such as interpretive vs. postinterpretive processes (Caplan & Waters, 1999).
Even when they do not agree on the theoretical framework within which they conduct their empirical work, researchers on sentence processing usually agree that certain syntactic relations (e.g., dependencies) and operations (e.g., syntactic displacement) are computationally more demanding than others. For instance, the syntactically different structures presented in (1)-(3) below are formed by a syntactic displacement operation known as wh-movement, in which a wh-constituent (who, what) moves from its original position in D-structure, leaving a co-indexed trace (t) behind.

¹ Baddeley & Hitch's (1974) three-component model of WM consisted of the phonological loop, the central executive, and the visuo-spatial sketchpad, but an additional component, the episodic buffer, has recently been added to it (Baddeley, 2003). The phonological loop stores and processes verbal information, the visuo-spatial sketchpad stores visual information, while the central executive is a "controlling attentional" system, which "supervises and coordinates" the other two components (Baddeley, 1990, p. 71). The role of the episodic buffer in the revised model is to provide an interface between the WM subsystems and long-term memory (Baddeley, 2003). Thus, in this functional architecture of memory, the WM system is functionally open to any type of information that needs to be stored for a brief period of time.
In order to interpret any wh-structure, a speaker/hearer needs to establish a link, i.e., a chain, between the moved wh-phrase and its original position, marked by a trace. Since the trace left in place of the moved element and the wh-constituent whomᵢ/whatᵢ refer to the same entity (co-reference is indicated by the index i), whom/what is considered the antecedent of tᵢ. Together they constitute a chain (e.g., whomᵢ, tᵢ), whose head is whom and whose tail is the wh-trace. Critical in sentence interpretation is the understanding of "who is doing what to whom", that is, the assignment of thematic roles. Since thematic roles are actually assigned to positions in a sentence, a moved wh-phrase can receive a thematic role only via its trace. Thus, the role of a chain is to enable the transfer of thematic roles from the trace to the moved element.
Syntactic movement of wh-constituents in the above examples requires the human sentence processor to hold the displaced element (who) in WM until it encounters its subcategorizer (follow) or until it is reconstructed in its canonical position marked by a trace (t) (Felser, Clahsen, & Münte, 2003). In other words, comprehension of complex syntax requires that the contents of intermediate representations be temporarily stored so that different processes can operate on them. In addition to the trace, which is a phonologically empty placeholder of a moved element, these intermediate syntactic representations can contain other abstract elements necessary for establishing syntactic hierarchical and dependency relations.
In SVO (Subject-Verb-Object) languages, such as English, comprehending a declarative sentence with canonical SVO word order, as in (4), poses less computational demand than comprehension of an interrogative sentence, as in (5):

(4) Mary likes White Teeth.
(5) Which book does Mary like t?
Since (4) has a canonical SVO word order (subject-verb-object), the direct object (White Teeth) immediately follows the verb like. However, (5), being a question with a fronted wh-object, has a noncanonical structure. It thus places a greater computational demand on the parser because readers/hearers cannot assign a grammatical interpretation to the fronted wh-expression until they establish a syntactic dependency relation between which book and its trace, nor can they assign a thematic role to which book until they encounter the verb like. Thus, the processing cost depends on the distance between the dislocated element (which book), also called the filler (i.e., the antecedent), and its subcategorizing verb (like), as well as on the distance between the filler and a gap (i.e., the trace left behind)². The process of forming an association between a filler and a gap is called gap filling (Pickering & Barry, 1991). The two relationships, filler-gap and filler-subcategorizer, are critical in sentence processing.
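To make the filler-gap logic concrete, the following is a minimal toy sketch, not a claim about any of the models cited above. It assumes, for simplicity, that the filler is a single wh-word, that the gap is filled when the subcategorizing verb is reached, and that memory cost is just the number of words the filler is held; the VERBS set and the example sentences are illustrative only.

```python
# Toy sketch of gap filling: a fronted wh-filler is stored in a
# working-memory slot and released at its subcategorizing verb.
VERBS = {"like", "likes", "follow", "kick"}
WH_WORDS = {"which", "who", "whom", "what"}

def gap_filling_cost(words):
    """Return how many words a fronted wh-filler is held in memory
    before its subcategorizing verb is reached (None if no filler)."""
    filler = None
    held_steps = 0
    for word in words:
        if word.lower() in WH_WORDS and filler is None:
            filler = word          # store the displaced element
            continue
        if filler is not None:
            held_steps += 1        # memory cost accrues each step
            if word.lower() in VERBS:
                return held_steps  # gap filled at the subcategorizer
    return None

# Canonical SVO sentence (4): no filler, hence no gap-filling cost.
print(gap_filling_cost("Mary likes White Teeth .".split()))     # None
# Fronted wh-object (5): the filler is held until the verb 'like'.
print(gap_filling_cost("Which book does Mary like ?".split()))  # 4
```

On this toy metric, (5) carries a nonzero memory cost while (4) carries none, mirroring the canonical vs. noncanonical contrast described above.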
The nature of the relationship between the filler and its subcategorizer (verb or preposition) has provoked an ongoing debate in the literature (e.g., Pickering & Barry, 1991; Gibson & Hickok, 1993; Gorrell, 1993; Pickering, 1993; Fiebach, Schlesewsky, & Friederici, 2001, 2002), resulting in two models of sentence processing. The representatives of the Empty Category Model of sentence processing claim that the association between the moved constituent and its subcategorizer crucially depends on the extraction site (Gibson & Hickok, 1993; Gorrell, 1993), while the advocates of the Direct Association Hypothesis (DAH) argue against empty categories in human sentence processing, claiming that the moved constituent is directly associated with the subcategorizer (Pickering & Barry, 1991; Pickering, 1993). What is common to both models is the assumption that processing complex syntactic structures involves a significant amount of computational load, imposing great demands on WM.
Several interesting proposals on the role of WM in syntactic processing have been developed by the representatives of the Empty Category Model. The main goal of the present paper is to evaluate the concept of syntactic working memory (SWM) that has emerged from such proposals (Gibson, 1998; Fiebach et al., 2002), focusing on its cognitive status and neural underpinnings. More specifically, evidence is presented indicating that SWM is a wider concept, whose role extends beyond the domain of sentence processing to include other domains, such as music perception and arithmetic, and which subsumes other types of syntactic integrations within sequences of structurally dependent elements.

FROM SYNTACTIC PREDICTION TO INTERPRETATION
Linguists typically use the term syntax to refer to the structural principles of a particular human language, such as those pertaining to noun phrases (NPs), verb phrases (VPs), and other sentential elements. Another use of this term is rather generic, referring to the organization of any formal system, be it a computer language, a logical language, or simply a piece of music (Jackendoff, 1997; Patel, Gibson, Ratner, Besson, & Holcomb, 1998; Maess, Koelsch, Gunter, & Friederici, 2001; Koelsch, 2006; Patel, 2008). Even when it is used in the first, narrow sense, syntax is not merely "about sentences" of a particular language; it is also about the specific principles that determine how discrete structural elements combine into sequences and about the abstract syntactic forms that represent them. Linguistic theories postulate different types of abstract syntactic forms and principles governing the grammar. Although syntax is not a monolithic task (Poeppel & Embick, 2005) and it is thus difficult to understand how the brain processes syntactic structures, there is growing neurocognitive evidence on various aspects of the comprehension of complex syntax, including evidence on the role of WM in it.

² In addition to the processing load created by these two relationships, the nature of the material that intervenes between the filler and its gap/subcategorizing verb may also contribute to the processing load.
In this section, evidence is presented for the implication of SWM in syntactic processing of language, music, and arithmetic sequences, as well as in complex motor movements used with a specific expressive purpose.The idea is that an anticipatory structure-building component governs interpretation in each of these domains and that this type of processing is supported by SWM.

Language
The Syntactic Prediction Locality Theory (SPLT) proposed by Gibson (1998) focuses on the relationship between the parser (the sentence processing mechanism) and the available computational resources provided by WM. The theory is based on the idea that the processing cost consists of two separate components: a memory cost and an integration cost. The memory cost component determines the amount of computational resources needed to store a "partial sentential input", while the integration cost component determines how much of the computational resources needs to be "spent on integrating new words into the structures built thus far" (Gibson, 1998, p. 8). In other words, the memory cost depends on the number of syntactic predictions about the lexical categories that have to be held in WM, while the integration cost depends on the number of new discourse referents. The key constraint on both memory cost and integration cost in Gibson's proposal is locality: syntactic predictions held in memory for a longer period of time and integrations over longer distances are more expensive. Thus, the parser critically depends on WM processes for establishing the filler-gap relation. In sentences such as Which zebra did the giraffe kick?, the temporary storage of a moved constituent, its retrieval from WM, and its integration with the subcategorizing verb all contribute to the processing cost (Fiebach et al., 2001; Fiebach et al., 2002; Felser, Clahsen, & Münte, 2003).
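The locality idea behind the integration cost can be rendered as a toy computation. This is a strongly simplified sketch, not Gibson's actual SPLT algorithm: here integration cost is just the number of new discourse referents (in this toy lexicon, words tagged as nouns or verbs) intervening between a dependent and its head, and the tags and dependency pairs are hand-supplied for illustration.

```python
# Toy, SPLT-inspired locality cost: longer-distance integrations over
# more intervening discourse referents are more expensive.
REFERENT_TAGS = {"N", "V"}  # toy assumption: nouns/verbs introduce referents

def integration_cost(tagged_words, dependencies):
    """tagged_words: list of (word, tag) pairs.
    dependencies: list of (dependent_index, head_index) pairs.
    Returns the total locality-based integration cost."""
    total = 0
    for dep, head in dependencies:
        lo, hi = sorted((dep, head))
        # count discourse referents strictly between dependent and head
        total += sum(1 for _, tag in tagged_words[lo + 1:hi]
                     if tag in REFERENT_TAGS)
    return total

# "Which zebra did the giraffe kick?": the fronted object 'zebra' must be
# integrated with 'kick' across one intervening referent ('giraffe'),
# while the local subject-verb dependency costs nothing.
sent = [("Which", "D"), ("zebra", "N"), ("did", "AUX"),
        ("the", "D"), ("giraffe", "N"), ("kick", "V")]
deps = [(1, 5),   # filler 'zebra' <- verb 'kick' (long-distance)
        (4, 5)]   # subject 'giraffe' <- verb 'kick' (local)
print(integration_cost(sent, deps))  # 1
```

The long-distance filler-verb dependency is the only one that accrues cost here, which is the qualitative pattern the SPLT predicts for such questions.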
Pursuing this line of reasoning, Fiebach and colleagues (Fiebach et al., 2001, 2002; Fiebach, Schlesewsky, Lohmann, von Cramon, & Friederici, 2005) conducted event-related brain potential (ERP) and functional magnetic resonance imaging (fMRI) experiments on the comprehension of embedded wh-questions in German. Unlike English, which is an SVO language, German is a verb-final (SOV) language. In English, a subcategorizing verb is followed by a direct object gap, and the thematic role of a moved element cannot be established before a dependency relation between the moved element and its subcategorizing verb is established. This makes it difficult to determine in English whether the processing cost depends more on the relation between the subcategorizer and the moved element or on the link between the moved element and its trace. This is not the case with SOV languages like German, where the subcategorizing verb occupies the sentence-final position. According to the Active Filler Strategy (Frazier & Clifton, 1989), the parser will establish syntactic dependencies as early as possible, which in the case of SOV languages means that the parser is able to establish the dependency relation between the filler and its gap before the verb is encountered. Since German marks the grammatical functions of NPs by overt case endings, the filler-gap relation can be incrementally established before the subcategorizing verb is processed, allowing preliminary assignments of thematic roles (Fiebach et al., 2002, 2005; Felser et al., 2003)³. This makes it possible to explore the dissociation between the memory cost (which depends on the relationship between the displaced constituent and its trace) and the integration cost (which depends on the relationship between the displaced element and its subcategorizer) in German, but not in English.
More specifically, Fiebach et al. (2002) found that case-unambiguous embedded German object wh-questions with a longer distance between the antecedent and its trace elicited a specific ERP effect, a left anterior negativity (LAN), as a consequence of the greater computational load in their processing. Since the same type of object questions with a shorter distance, as well as subject questions, did not elicit a similar response, it follows that the LAN is the effect of processing a very specific filler-gap distance. The authors emphasize that the LAN reflects the memory cost of keeping the moved wh-constituent in WM over a longer distance. Another processing step, integration of the moved element into a phrase structure representation, elicited another ERP component, a late positivity (P600) at the second NP in short and long object wh-questions. The P600 here reflects difficulty with the syntactic integration of object structures. Similarly, their fMRI study has shown that Broca's area "houses mechanisms that enable the sentence processor to keep syntactic information actively available over sustained periods of sentences while new linguistic information is being processed continuously" (Fiebach et al., 2005, p. 88). Their fMRI findings are in accord with the growing body of neuroimaging evidence suggesting a critical role of Broca's area in syntactic processing (Just, Carpenter, Keller, Eddy, & Thulborn, 1996; Stromswold, Caplan, Alpert, & Rauch, 1996; Embick, Marantz, Miyashita, O'Neil, & Sakai, 2000; Caplan, Alpert, Waters, & Olivieri, 2000; Caplan, Vijayan, Kuperberg, West, Waters et al., 2001; Ben-Shachar, Plati, & Grodzinsky, 2003).
In summary, Fiebach and colleagues have shown that the processing costs in embedded short vs. long and subject vs. object German wh-questions differ. Since the LAN and the positive-going ERP effect are two different ERP components, they indicate that two different WM mechanisms are involved in processing these structures, reflecting syntactic memory costs and syntactic integration costs, respectively (Fiebach et al., 2002). Felser et al. (2003) took a step further in exploring the two types of processing costs by trying to resolve whether the memory cost is determined linearly or hierarchically, that is, whether the distance depends on the number of words that intervene between the antecedent and the trace or on the hierarchical organization of the phrase structure. In order to determine whether the distance is linear- or structure-dependent, Felser et al. (2003) compared the ERPs elicited by the processing of three types of structures: wh-object movement, long object topicalization, and raising plus object topicalization, along with a control structure consisting of short object topicalization. Their results support the claim that the LAN reflects the memory cost and the P600 reflects the integration cost. Additionally, they found that the memory cost, unlike the integration cost, was not influenced by the type of dependency (e.g., wh-movement vs. topicalization) that holds between the filler and the gap. However, it depended on the syntactic complexity of the elements intervening between the filler and the gap. For example, raising and object topicalization constructions are syntactically more complex than wh-movement or object topicalization constructions, because they require a clausal complement. This complement intervenes between the filler and its subcategorizer as an additional syntactic level (Felser et al., 2003). Thus, the structural complexity of the sentence is increased by a whole new syntactic level, which differs from an increase in complexity at the same syntactic level, for instance by adding more elements to a phrase structure. Similarly, Ueno & Garnsey (2008) have shown that the processing of relative clauses in another SOV language, Japanese, reveals an extra processing cost for object over subject relative clauses when reading times and ERPs are measured. Since in Japanese the subject and object gap both appear adjacent to the wh-filler, this finding also indicates the critical impact of structural complexity on the processing cost.
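The linear vs. structural distinction at issue here can be illustrated with two toy metrics. The bracketed strings below are invented stand-ins, not the actual German stimuli from the studies discussed: each "[" is taken to open a new syntactic level, so two sequences can have similar linear filler-gap distances but different embedding depths.

```python
# Toy contrast between linear distance (word count) and structural
# distance (embedding depth) between a filler and its gap.

def linear_distance(words, filler_i, gap_i):
    """Number of words intervening between the filler and the gap."""
    return gap_i - filler_i - 1

def structural_depth(brackets):
    """Maximum embedding depth in a bracket string, where each '['
    opens an additional syntactic level."""
    depth = best = 0
    for ch in brackets:
        if ch == "[":
            depth += 1
            best = max(best, depth)
        elif ch == "]":
            depth -= 1
    return best

words = "whom the tall old giraffe kicked t".split()  # whom=0 ... t=6
print(linear_distance(words, 0, 6))  # 5

# Comparable linear spans, different structural complexity: the second
# string adds a clausal complement, i.e., a whole new syntactic level.
flat   = "[ whom the tall old giraffe kicked t ]"
nested = "[ whom she seems [ to have kicked t ] ]"
print(structural_depth(flat), structural_depth(nested))  # 1 2
```

On Felser et al.'s (2003) findings, it is the second metric, the added syntactic level, that modulates the memory cost, not the raw word count.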
The ERP evidence on the processing of wh-dependencies in German discussed so far suggests that the relationship between the dislocated element and its trace can be established before the verb is encountered. Other evidence from neurologically intact (Bornkessel et al., 2003) and impaired populations (Kljajević & Murasugi, in press) also suggests that overt case marking allows the assignment of thematic roles directly, that is, without the mediation of a verb or other syntactic information. In other words, the integration of syntactically relevant information that allows making predictions about the syntactic structure of an unfolding sentence does not happen only over a distance, as between a moved element and its trace, or between the moved element and its subcategorizer. It also occurs within the specific elements that make up sentential constituents. Consider the Serbian translation of example (5), which is presented in (6):

(6) Koju knjigu Mary voli t?
The moved wh-word koju contains the ending -u, which conveys information about morphological case (accusative), gender (feminine), and number (singular). Since syntactic dependencies are established as soon as grammatically possible (Frazier & Clifton, 1989), the parser employs this information to deduce the grammatical function of the moved word koju (direct object) and predict its thematic role (Patient/Theme). Thus, overt case marking in Serbian indicates grammatical functions, which allows an almost immediate temporary assignment of thematic roles and thus faster predictions about the upcoming syntactic structure (in (6) this can take place at the position of -u in koju). Since this is not the case in English, which relies exclusively on word order to assign thematic roles, we would expect the memory cost of processing structures such as (6) to be smaller than the processing cost of its English counterpart. More importantly, extracting the information from -u in koj-u and using that information to predict, build, and interpret a syntactic structure requires both temporary storage of these transient forms and extremely rapid integration processes as the sentence unfolds in time. The modularity of these transient syntactic forms can be explained in terms of their time span; in other words, "a module is only 'modular' with respect to a given time span because ultimately the information from various sources usually comes together" (Keller, Carpenter, & Just, 2001).
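The way an overt case ending can license an early grammatical-function assignment can be sketched as a simple lookup. The feature table below is a deliberate simplification of Serbian morphology, covering only the two endings relevant to example (6); the mapping from ending to function and default thematic role is an illustrative assumption, not a full morphological analysis.

```python
# Toy illustration: reading grammatical features off the final vowel of
# a Serbian wh-word before the verb is processed. The table is a
# simplified, illustrative fragment of Serbian case morphology.
WH_ENDINGS = {
    "u": {"case": "accusative", "gender": "feminine", "number": "singular",
          "function": "direct object", "theta": "Patient/Theme"},
    "a": {"case": "nominative", "gender": "feminine", "number": "singular",
          "function": "subject", "theta": "Agent"},
}

def predict_from_wh(word):
    """Return the grammatical information made available by the final
    vowel of a wh-word, before the verb is encountered (None if unknown)."""
    return WH_ENDINGS.get(word[-1])

features = predict_from_wh("koju")
print(features["function"], features["theta"])  # direct object Patient/Theme
```

The point of the sketch is the timing: on encountering koju, the parser already has a grammatical function and a provisional thematic role available, whereas the English which book provides neither until the verb is reached.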
To summarize, the construction of a partial syntactic representation of a minimal grammatical clause by predicting the proper sequencing of its elements (Gibson, 1998) and/or by integrating the available structural information regardless of the level of processing (Fiebach & Schubotz, 2006) critically depends on the available SWM resources. We now turn to the evidence indicating that the same WM mechanism that is implicated in syntactic processing in language may be involved in syntactically complex sequential processing in other domains.

Music and math
Given that syntax in general is a set of structural rules that apply to a specific type of discrete elements, it is plausible to assume that the structural integration of musical sequences also requires SWM. This is not to claim that language and music are not domain-specific; rather, while remaining domain-specific, they seem to share WM resources. More specifically, evidence that music and language are modular (Jackendoff, 1995) indicates that both domains involve "distinct cortical representations and that either domain can be impaired selectively" (Schellenberger & Peretz, 2007, p. 47). The latter is exemplified by double dissociations: there are cases of amusia (an acquired music deficit) without aphasia (an acquired language deficit) (Piccirilli, Sciarma, & Luzzi, 2000), and of aphasia without amusia (Signoret, van Eckhout, Poncet, & Castaigne, 1987). Overall, such evidence indicates that language and music processing are independent (Hébert, Peretz, & Racette, 2008).
Yet, a cognitive task that is domain specific (e.g., gap filling in language processing) may share processing resources with another type of cognitive task that is also domain-specific (e.g., comprehension of harmonic relations in musical syntax) (Patel, 2003).In other words, although knowledge of linguistic syntax differs from the knowledge of musical syntax and even though these respective sets of rules build two different types of domain-specific representations (linguistic vs. musical), there are aspects of the two syntactic systems that seem to share some processing resources (Patel, 2007).
An interesting proposal on shared cognitive resources in syntactic processing in language and music has been formulated as the "shared syntactic integration resources hypothesis" (SSIRH) (Patel, 2003, 2007, 2008). The hypothesis is based on two principles: (i) language and music form domain-specific representations, and (ii) similar cognitive processes that operate on representations belonging to different domains share neural resources. Furthermore, the SSIRH postulates that domain-specific representations are stored in long-term memory. In language, the lexicon obviously needs to be stored in LTM. The same holds for the rules and principles of syntax. However, since syntactic representations are transient, intermediate structures that are built in extremely rapid, automatic, integrating processes as the sentence unfolds in time, they cannot be stored in LTM, as the SSIRH postulates; rather, they are supported by some type of WM. Thus, activation of structural knowledge that is domain-specific would take place in LTM (words, rules of syntax), providing input for various rapid WM processes taking place in different aspects of sentence processing. SWM could be construed as an interface between LTM and these rapid WM processes, specifically directed at making structural predictions in building representations that will be used in deriving interpretations of continuous structural input. An attempt to apply the same principle to music reveals, first of all, that a "monolithic account" of the nature of musical representation and structure (Swinney & Love, 1998) has not yet been developed. Still, if current research on musical syntax is correct in postulating transient syntactic forms in the processing of musical sequences, one would expect such processing to employ SWM as well.
According to the second principle of the SSIRH, the cognitive resources that are shared among domains share neural resources. Previous research has shown that general capacity resources may support processing without supporting "storage of the output of processes" (Martin & Romani, 1994, p. 522) and that two or more domains may compete for the same neural space (Bates, 2001; Keller et al., 2001). An example of a mechanism shared by structural processing in language and music is a late positive-going ERP waveform, the P600 (Patel et al., 1998). This "syntactic positive shift" has repeatedly been associated with syntactic errors, errors in patterned sequences in language, or with a difficult integration of meaningful sequences (Osterhout et al., 2007). Patel et al. (1998) set out to investigate whether the processing of syntactically incongruous structures would elicit the P600 ERP component in music in addition to language, thereby directly testing the hypothesis of shared SWM resources. The congruity of the language stimuli was manipulated according to the principles of phrase structure, while the congruity of the musical stimuli was manipulated according to the principles of harmony and key-relatedness. They found that both syntactically incongruous language and musical structures elicited the P600, suggesting that "the P600 reflects processes of knowledge-based structural integration" (p. 727). In an unpublished multimodal study of the processing of in-key vs. distant-key music sequences, Venkatraman, Ambrose, and Kljajević (2008) found MEG and fMRI evidence for the involvement of the insula in the in-key condition, while the distant-key condition was associated with more activation in Broca's area. The distant-key condition was also associated with a P600 effect in the ERP data and with an effect around 600 ms in the MEG data. This evidence also supports the claim that the processing of structurally more demanding sequences in music, such as distant-key sequences, elicits effects similar to those reported in language studies.
Since the processing of ill-formed structures (Patel et al., 1998) may involve additional processes, such as error detection, reanalysis, and revision, in another study on shared cognitive resources for structural processing in language and music, Fedorenko, Patel, Casasanto, Winawer, and Gibson (2009) tested well-formed structures of varying degrees of linguistic and musical structural-integration difficulty, which were presented as sung sentences. They found that when musical integrations were more difficult (as in the out-of-key vs. in-key condition), the difference in comprehension between subject- and object-relative clauses was larger. Further evidence showing that linguistic processing interacts with arithmetic but not with spatial integration processes (Kennedy & Small, 2002; Fedorenko, Gibson, & Rohde, 2007) also supports the idea that syntactic/sequential processes in language, music, and arithmetic share cognitive resources for structural integration.
In addition, Broca's area (BA 44 and BA 45) has been linked with tasks involving both linguistic and musical syntax. There are numerous fMRI (Just et al., 1996; Embick et al., 2000; Caplan et al., 2001; Ben-Shachar et al., 2003) and PET (Stromswold et al., 1996; Caplan et al., 2000) correlational studies, as well as transcranial magnetic stimulation (TMS) evidence (Sakai et al., 2002) and data from human lesion-deficit studies, that suggest a causal link between Broca's area and the processing of syntax in language. Similarly, activation in Broca's area in studies on musical syntax has been repeatedly demonstrated (Maess et al., 2001; Koelsch, 2006; Patel & Iversen, 2008). Curiously, the same area is also involved in the perception of meaningful representations of sequences of actions (Fadiga, Craighero, Desto, Finos, Cotillon-Williams et al., 2006; Fadiga & Craighero, 2006; Fiebach & Schubotz, 2006). What is common to these different types of syntactic and sequential processing is structural integration based on prediction, which is not a random operation but follows domain-specific syntagmatic combinatorial rules. The fact that the same brain region is activated by different types of structural integration indicates that domain specificity does not extend to the cognitive resources involved. At the same time, activation of the same brain area by structural-integration tasks from different domains does not necessarily imply the same cognitive function. Thus, future research will need to provide more evidence for a causal link between the syntactic integration processes and the implicated brain areas (as in Sakai et al., 2002).
THE COGNITIVE STATUS OF SYNTACTIC WORKING MEMORY

Fiebach et al. (2001) claim that "...there exists a separate cognitive or neural resource that supports syntactic working memory processes necessary for the temporary maintenance of syntactic information for the parser" and that "syntactic working memory, rather than syntactic processing per se, is supported by Broca's area" (Fiebach et al., 2001, p. 321).
Recently, Grewe et al. (2005) proposed that a specific portion of Broca's area, the pars opercularis (BA 44), may support interface functions "which integrate several information types to evaluate potential sequential ordering" (p. 188). In developing his ideas on representational modularity, Jackendoff (2000) proposed that in a case where two integrative modules (e.g., language and music) compete for a neural area, "Such an area would be in a position to detect and record fortuitous coincidences in pattern between two modules that are capable of using it. It might therefore be possible for it to develop into an interface processor without having any special use for that purpose wired into it from the start." (Jackendoff, 2000, p. 22). Jackendoff's (2000, 2002) idea that such neural space would function as an "interface module" opens up conceptual space in which to further define SWM.
Although the idea of SWM as a specific memory mechanism originated in studies on gap filling in the comprehension of complex sentences, its role has since been extended beyond sentence processing to include music, arithmetic, and perhaps complex movements with an expressive purpose (Fiebach & Schubotz, 2006). In contrast to the classical modules (Fodor, 1983), interface modules are not restricted to one domain, which enables them to perform tasks that require the simultaneous involvement of two cognitive domains (Jackendoff, 1997, 2000, 2002). Thus construed, the concept of interface modules that is emerging from recent, more flexible proposals on cognitive architecture (Jackendoff, 2000, 2002; Bennardo, 2003) seems to answer the question of how different cognitive domains engaged in the processing of a single task communicate information among themselves, resulting in successful performance of that task (van der Zee & Nikanne, 2000; Jackendoff, 1997, 2000, 2002; Platzack, 2000; Slack, 2000). At the same time, it allows a better understanding of the types of processing that share resources.
Since language is neither a unified nor an isolated phenomenon, it requires both intra- and intermodular connectedness. Intramodular connectedness is the communication of information within the domain of language, while intermodular connectedness assumes the communication of information with other integrative modules, such as memory, attention, or vision. Unlike integrative modules, SWM is an interface module that enables the storage and manipulation of intermediate syntactic representations while full representations are being built. Such an interface would take an already constructed intermediate representation (i.e., a structural prediction) as input and apply the processes that constrain its integration according to the rules of a particular syntax. Given the representational modularity hypothesis and the available cross-domain syntactic processing evidence, one could argue that SWM is a modality-independent information-binding mechanism that communicates information on the structural dependencies within sequences, enabling meaningful representations in cognition, perception, and perhaps action. Thus, it is more plausible to assume that SWM is an interface module operating between long-term and short-term memory than to postulate another component specifically dealing with linguistic syntax within the verbal WM part of Baddeley's model.
(1) I wonder [CP whomᵢ [IP the giraffe will follow tᵢ]] - embedded question
(2) I see [NP the manᵢ [CP whomᵢ [IP the giraffe kicked tᵢ]]] - relative clause
(3) [CP Whatᵢ do you think [CP tᵢ′ the frog did tᵢ]]? - long-distance wh-question