Embodying Cognition: Gestures and Their Role in the Development of Thinking
Vasc, Dermina, Ionescu, Thea, Cognitie, Creier, Comportament
This paper is a review of the literature about different types of gestures and the functions they serve across development, and about how gesture can bring evidence to support the embodied cognition approach. We will describe what aspects of children's thinking are revealed when they start using deictic, conventional and iconic gestures. We will discuss gesture's relation with language, symbol understanding, and learning. We will then present evidence for how gesture has been shown to facilitate the connection between action and thought, arguing that gesture reflects embodiment.
KEYWORDS: gesture, embodied cognition, learning, action, children
Gestures are a form of intentional communication based on spontaneous hand movements that do not lead to (direct) physical changes in the external world (Cartmill, Beilock, & Goldin-Meadow, 2012; Cartmill, Demir, & Goldin-Meadow, 2012; McNeill, 1992; Tomasello, 2008). The meaning of a gesture cannot be extracted in isolation from the verbal or the physical context in which it unfolds. As McNeill (2005) shows, gestures have two basic features: they carry meaning and are co-expressive with the simultaneous speech. Also, each gesture is created spontaneously. The fact that we tend to use similar gestures when we talk about a certain action, a relation, or an object's properties (e.g., closing our fingers into a circle and bringing it toward the mouth, to represent drinking from a cup) is a result of the fact that gestures illustrate perceptual regularities present in our shared world. Gestural communication is based on our memories of motor, visual and spatial experiences (LeBaron & Streeck, 2000), which we assume are similar for the other person. The receiver can understand the meaning of gestures because he/she recognizes the actions or the shapes they refer to. Similarly, the communicator uses gestures tacitly assuming that the receiver has the required experience for understanding their meaning. In this way, gestural communication is based on a common conceptual ground, on the shared knowledge of what "we both know together" (Tomasello, Carpenter, Call, Behne, & Moll, 2005).
According to common sense, gestures are a part of nonverbal behavior, reflecting a person's hidden attitudes or emotions. As such, the role of our hands and body when we gesture would be just to stress what we want to communicate via language: they neither interact with, nor influence, our cognition. Even in traditional cognitivism, cognition is separated from the rest of the body, from affect and emotions, and from the interaction with the physical and social world (Calvo & Gomila, 2008). The research pioneered by McNeill (1992) and Goldin-Meadow (2006) contradicts the common sense perspective that gesture is just hand waving. Gestures convey, and influence, knowledge. In other words, hand movements reflect, but are also intermingled with, our cognitive processes while we speak.
An accruing body of evidence in cognitive science in the last two decades suggests that the traditional view about cognition does not reflect the way in which our cognitive system functions (Boroditsky & Prinz, 2008; Ionescu, 2011; Laakso, 2011; Smith & Sheya, 2010). Let's consider the case of concepts, usually regarded as the abstract pieces of knowledge that our minds process in an amodal and independent way, being stored in our semantic memory (Barsalou, 2008). Recent data show that our representations depend on the modal systems of the brain and that they are multi-modal simulations, in other words a "re-enactment of perceptual, motor and introspective states acquired during experience with the world, body and mind" (Barsalou, 2009, p. 1281; see also Barsalou, 2003; Gainotti, Spinelli, Scaricamazza, & Marra, 2013; Gallese & Lakoff, 2005; Martin, 2007). Evidence for the latter approach comes, for example, from studies showing that there are costs associated with verifying different-modality properties for concepts. For instance, verifying a visual property (such as baby clothes - pastel) is quicker if it comes after another visual property (such as hair - fair) than after a tactile property (such as toast - warm) (Pecher, Zeelenberg, & Barsalou, 2003). But even if this approach explains concrete concepts well, the embodiment of abstract thought is still an open question (Boroditsky & Prinz, 2008; Wilson, 2008).
Our aim in this paper is to analyze gesture and thought from an embodied perspective. In other words, we propose that gestures are an intermediate step in- between situation-bound cognition and abstract cognition. We will argue for this by having an ontogenetic approach that we hope will underline how important gestures are for cognition, namely that they are in fact part and parcel of cognition.
The paper is structured as follows: first, we will make some conceptual clarifications regarding what gestures are and the different forms they can take. Then, we will describe the development of deictic, conventional and iconic gestures, and we will also discuss what roles gestures have been shown to have across development. In the final part, we will argue that gestures reflect embodiment. We will try, in the end, to draw some conclusions and discuss the implications of the reviewed research.
Types of gestures
Generally speaking, we can distinguish between (1) gesticulation, i.e., gestures that are co-expressive and synchronous with speech, (2) emblems, or conventional gestures, which are based on cultural conventions of using an arbitrary gesture to represent something (e.g., the OK sign, nodding for No), (3) pantomime, which involves gestures in the total absence of speech, in order to describe a complex scene, and (4) signs used by deaf persons, which involve a system of symbols that follows the structure of verbal language (i.e., syntax, morphology) (McNeill, 1992, 2005). As one can easily see above, it is only gesticulation, or gestures as gesticulation, that is used together with speech; the other forms do not require verbal language.
If we take in particular gesticulation, there are also several types of gestures (McNeill, 1992):
(1) Deictic gestures usually involve pointing the index finger toward a specific object or location in the environment. Sometimes other parts of the body, like the head, or the elbows can be used in order to direct attention to a specific location (McNeill, 1992; 2005). Pointing is used to identify a specific referent present in the environment and is available even before language acquisition.
(2) Iconic gestures are used to visually illustrate aspects related to concrete elements (i.e., actions associated with them, or their form) (Goldin-Meadow & Wagner, 2005; McNeill, 1992). They can describe actions, objects or relations that are not currently available, but that we are capable of recreating in our minds starting from the speaker's gestures. Iconic gestures supplement verbal communication with a visual illustration that is free from language's constraints: syntax, morphology, sequentiality. This kind of gesture can incorporate conceptual knowledge about their referents: their canonical actions (e.g., hammering with a hammer), form (e.g., making a little circle with the index finger and the thumb to represent a coin), or salient perceptual characteristics (e.g., index and middle finger in a V shape to represent a rabbit's ears).
(3) Metaphoric gestures are used to illustrate abstract elements and relations. McNeill (1992, 2005) distinguishes between a metaphoric use of form (the prototypical example being holding an imaginary object in the hand opened upward, to introduce an idea) and a metaphoric use of space (e.g., using the right hand for good, and the left hand for bad).
(4) Beat gestures are very simple up and down, or back and forth, movements, used to highlight the prosody of the discourse (Cartmill, Demir, et al., 2012; McNeill, 1992, 2005). They can also be used to draw attention to a specific element of the discourse.
This second taxonomy is called in the literature the Iconic-Metaphoric-Deictic-Beat Quartet (McNeill, 1992) and it is widely used. However, the author mentions that these different types of gestures are not nominal categories, but rather dimensions, because a single gesture can combine, to varying degrees, features of different types (McNeill, 1992, 2006).
As we will see further, the relation between gesture and speech is not the same across development. For example, the pointing gesture, though included in the gesticulation category, obviously does not appear together with language in preverbal children. Moreover, studies that analyze children's iconic gestures usually do not offer a simultaneous verbal label when presenting gestures (e.g., Namy, Campbell, & Tomasello, 2004; Namy & Waxman, 1998; Tolar, Lederberg, Gokhale, & Tomasello, 2008; Tomasello, Striano, & Rochat, 1999). The present paper is centered on gestures as gesticulation (especially on iconic and deictic gestures), but it will also briefly present the development of conventional gestures.
Gestures throughout development
This section discusses the development of gesture and its relationship to language learning in particular and conceptual learning in general, and also to symbolic development. We will present evidence about when children start to use deictic, conventional and iconic gestures. The focus will be on iconic gestures because the literature makes a clear link between them and symbolic development. When children start using gestures at adult levels, the focus on just one type of gesture is no longer relevant, because, as mentioned before, one gesture can have multiple dimensions and a child can use both deictic and iconic gestures to depict an event (McNeill, 1992, 2005). So, when we talk about preschoolers' and schoolchildren's gestures, our focus will shift from analyzing the role of one particular type of gesture to considering the role of gestures in general.
Infants' gestures in relation to language acquisition and consolidation
The gesture of pointing is an important milestone for infants' language and social development (Colonnesi, Stams, Koster, & Noom, 2010; Goldin-Meadow, 2007; Tomasello, Carpenter, & Liszkowski, 2007). Its function is mainly to direct the other person's attention toward an element present in or elicited by the current perceptual context (Tomasello & Liszkowski, 2007). Infants also point to the location of an object that previously was in the joint attentional frame, but is currently absent (Liszkowski, Schäfer, Carpenter, & Tomasello, 2009). Infants use two types of pointing: protodeclarative - when they want to attract the attention of adults toward the thing they point to - and protoimperative - when they request something (Bates, Camaioni, & Volterra, 1975). On average, children begin to point at the end of the first year of life (at about 11-12 months), but the onset can range anywhere between 7 and 15 months (Colonnesi et al., 2010; Tomasello et al., 2007).
While some consider that pointing is used only to trigger reactions from others or to obtain something from them (the so-called cognitively lean approach, Moore 1996; Shatz and O'Reilly 1990, as cited in Tomasello, 2008), others (Tomasello et al., 2007; Tomasello, 2008) opt for a cognitively rich and motivationally altruistic interpretation. They consider that infants' pointing denotes an early-emerging and uniquely human ability to cooperate and create joint attention and intention episodes, based on a common ground between the participants in a communicative act. In their perspective, pointing cannot accomplish its goal just by directing one's own or the other's attention to the right location (the "what" component) (Tomasello et al., 2007; Tomasello, 2008). What is also needed is to understand the point of pointing, namely the reason why the other wants you to attend to something (the "why" component). Understanding the motive behind pointing is built on a joint attentional frame, on a common ground between two persons who both "know that they know something together". This clarifies the reason for pointing and allows the sharing of attitudes, intentions, or emotions (Tomasello, 2008). When children do not yet produce words, we can observe their advanced mindreading skills and cooperation motives by looking at the pointing gestures that they use in order to communicate a variety of attitudes and intentions. So, by taking into consideration the gestures of infants we can elucidate their newly emerging communicative and social-cognitive abilities.
In agreement with the perspective of Tomasello et al. (2007), Goldin-Meadow (2007) considers that children point in order to share their interest in specific objects in the environment about which they want to communicate, and not just to direct the adult's attention to themselves. Moreover, Goldin-Meadow (2007) considers that pointing does not just precede children's ability to use verbal language in order to communicate, but can also predict and facilitate language learning. On the one hand, parents observe the gestures that children produce and offer labels for the indicated objects. On the other hand, children are encouraged to learn the labels for the objects that parents bring to their shared focus of attention (Cartmill, Demir, et al., 2012; Goldin-Meadow & Alibali, 2013). And, taking a step forward in developmental time, pointing continues to have a beneficial role in language development, even after children begin producing one-word utterances (Cartmill, Demir, et al., 2012; Goldin-Meadow, 2007; Goldin-Meadow & Alibali, 2013). Children first begin to produce two-word sentences by combining a gesture and a word, before they can do this in speech alone (e.g., a child points to a bird while saying nap, Goldin-Meadow & Alibali, 2013). Children also point to objects for which they do not yet have the verbal label, using words to show their interest (e.g., "Look at that!", Colonnesi et al., 2010) as they direct the adult's attention to the object of interest. It appears that, besides the fact that early pointing predicts the size of later vocabulary (Goldin-Meadow & Wagner, 2005; Goldin-Meadow, 2007; Iverson & Goldin-Meadow, 2005; Rowe & Goldin-Meadow, 2009), the age at which children begin to use these cross-modal utterances (i.e., a gesture plus a word) is predictive of the age at which they will start to form this kind of utterance using two-word combinations (Goldin-Meadow & Butcher, 2003; Goldin-Meadow, 2007; Özçalışkan & Goldin-Meadow, 2005).
These combinations also predict the complexity of the sentences children will form at 42 months (Rowe & Goldin-Meadow, 2009). Later in development, more complex (iconic) gestures are used in combination with verbal propositions in order to form sentences with several propositions (Goldin-Meadow, 2007; Özçalışkan & Goldin-Meadow, 2009). In this way, gestures are like scaffolds used for exercising the production of increasingly complex linguistic structures before verbal language alone can accomplish this goal. A recent meta-analysis (Colonnesi et al., 2010) offers further support for the predictive role of early pointing for later language development, showing that infants' pointing comprehension and frequency are predictive of their language development later in life. This longitudinal relation becomes significantly stronger with development (namely, during the second year of life).
While pointing is a phenomenon that has been intensively studied, there is less research about the emergence and developmental course of non-deictic gestures, which are considered more advanced than pointing because they refer to objects that are not present in the communication context. When they point, children refer to an object by simply indicating it, a behavior that does not require an understanding of symbols. Non-deictic gestures, on the other hand, demonstrate: (i) an ability to use symbols based on arbitrary associations (conventional gestures, baby signs, and signs), or, in addition, the ability to detect an (ii) iconic or (iii) abstract resemblance between the gesture and its referent (iconic or metaphoric gestures). Beat gestures, in turn, reflect the capacity to use complex discursive structures, and they can be observed only when an adult level of language development is attained. So, iconic and beat gestures increase in frequency and complexity in parallel with language development (Mayberry & Nicoladis, 2000).
Conventional gestures and baby signs
In the second year of life children produce conventional gestures (e.g., waving the hand for Bye-bye, nodding for No, putting the index finger in front of the mouth and shushing for silence, or other types of signs whose meaning is established across repeated parent-child interactions). These gestures are acquired by imitation and cultural learning (Iverson, Capirci, & Caselli, 1994; Tomasello, 2008) and are considered arbitrary symbols. Infants can also produce baby signs, a system of signs that infants are encouraged to learn in infant schools and/or from previously trained parents. As Namy and Waxman (1998) mention, children can use both words and symbolic gestures as labels at the onset of language development, but an object usually has either a word label or a gesture label - not both. As language steps in, children prefer to use mostly words (Acredolo & Goodwyn, 1985, 1988; Iverson et al., 1994), and, by 26 months, they will be reluctant to accept an arbitrary gesture as a label, although they will accept an iconic gesture as a label, in contrast to 18-month-olds, who accept both arbitrary and iconic gestures as labels (Namy et al., 2004; Namy & Waxman, 1998; Tomasello et al., 1999). Some studies report that children who learn these signs and use them to communicate with their parents will have a larger vocabulary later in development (Acredolo & Goodwyn, 1988; Goodwyn, Acredolo, & Brown, 2000), but others did not find data to support this beneficial role (for a review, see Johnston, Durieux-Smith, & Bloom, 2005).
Iconic gesture emergence and symbolic development
By using iconic gestures we outline the shape of our thoughts: we can recreate images of the actions, objects or events we are communicating about, so that we better convey our message. Iconic gestures are different from other communication symbols, like language (verbal or sign language), the latter being based on learning an arbitrary connection between meaning and form. The connection between an iconic gesture and its referent is not deliberately learned; rather, it can be deduced on the basis of our conceptual knowledge. Iconic gestures are perceptually similar to their referents, so that, as McNeill (1992) shows, when we describe the form of a gesture we also describe its meaning.
There is less research about the age at which children start to produce or comprehend iconic gestures, but recent studies suggest that this ability emerges around the beginning of the third year of life (Namy et al., 2009; Namy & Waxman, 1998; Namy, 2001, 2008; Özçalışkan & Goldin-Meadow, 2011). Özçalışkan and Goldin-Meadow (2011) observed that around 26 months there is a spurt in children's spontaneous production of iconic gestures, although their frequency is still reduced compared to adults. At this same age, compared with younger children (18- and 22-month-olds), children demonstrate a more robust ability to comprehend the iconicity in gestures (Namy, 2008). Others show that it is not until 36 months that children really understand iconicity (Tolar et al., 2008). Namy (2008) found that at 26 months, children were able to recognize the object to which an iconic gesture referred, although the objects and their actions were all new to them. After learning the actions that can be performed with the new objects, children were presented with iconic gestures referring to each object's specific action. Although there was no explicit association between actions and gestures, they correctly understood to which object the gesture referred. This experimental paradigm is a stronger test of whether children do indeed apprehend iconicity, as it eliminates the possibility that they map an iconic gesture to a specific object based on repeated associations and ritualized actions with that object in their home environment, rather than by understanding the gesture's iconicity (Namy, 2008). These more recent studies show that, although young children can use with equal facility arbitrary gestures or words as labels for objects, the understanding of iconic gestures emerges only later, and is fragile in children younger than 26 months.
These results contradict the classical presupposition that understanding iconic symbols is a prerequisite for understanding arbitrary symbols (Werner and Kaplan, 1963). This perspective was based on the assumption that, due to the similarity with their referents that can be detected without any previous explicit association between the two, iconic gestures are more easily interpreted than are arbitrary ones (Tolar et al., 2008). The fact that children initially learn with equal facility both iconic and arbitrary symbols, suggests that they do not detect a difference between the two: they don't extract the iconicity in the gestures that are presented to them (Namy et al., 2004; Namy & Waxman, 1998; Tomasello et al., 1999). On the other hand, by the time children start using verbal language - a system that is based on arbitrary connections between the meaning and the form of the words - evidence of a clear and robust understanding of gesture iconicity is scarce. This demonstrates that there is no advantage of iconicity compared to other arbitrary symbols at the onset of symbolic development, but rather the reverse is true (Namy et al., 2004). What counts more in the facility with which young children (before two years) learn symbols, appears to be the frequency of symbol-referent associations to which children are exposed. The advantage of iconicity emerges later in development, suggesting that children need to reach a more advanced level of cognitive development in order to understand iconicity (Namy et al., 2004; Namy, 2008; Tomasello, 2008). Additional support in favor of this developmental sequence comes from the fact that deaf children learning sign language acquire with equal facility arbitrary and iconic signs, whereas adults seem to understand and learn more rapidly iconic signs (Tolar et al., 2008). 
Arbitrary or conventionalized gestures are learned in ways similar to verbal language, that is, by repeated associations between objects and labels, or from ritualized routines (Tomasello, 2008). In contrast, iconic gestures are more complex because, in addition to symbolic representation, their understanding is based on the ability to process the abstract pictorial resemblance between a symbol and its referent, so categorical knowledge is probably also needed.
All of the experimental studies described so far treated iconic gestures as labels and did not combine them with a simultaneous verbal message. This is an important aspect, given that it is considered that gesture and speech cannot be studied in separation, because they are part of the same integrated communication system (Goldin-Meadow & Alibali, 2013; Goldin-Meadow & Wagner, 2005; McNeill, 1992, 2005). In a recent study, Stanfield, Williamson, and Özçalışkan (2013) investigated the age at which children start to comprehend iconic gestures in combination with a simultaneous verbal message. Their results show that only 3- and 4-year-old children, and not 2-year-olds, performed above chance. This suggests a protracted developmental course for the ability to comprehend iconic gestures in combination with speech.
Striano, Rochat, and Legerstee (2003) reported that 20-month-old children performed well when they were required to give an object that the experimenter had represented with an iconic gesture, but only if that object's canonical action was previously modeled. At 24 months, they succeeded even in the absence of modeling, similar to the results obtained by Namy (2008) and Özçalışkan and Goldin-Meadow (2011), who found progress in iconicity interpretation at 26 months. However, as Tolar et al. (2008) mention, because the gesticulated actions were highly conventional, it is quite possible that children's performance at 24 months rests on the fact that those actions were very familiar to them, ritualized, so that we cannot say for sure that their success reflects a grasp of the gestures' iconicity. Because children recognized the gestures whether they were executed in a highly conventional manner (e.g., knocking with a closed fist to pretend hammering) or in a low-conventional one (e.g., hammering with the elbow), an alternative explanation would be that children find it easier to extract the iconicity from symbols that represent actions highly familiar to them. This suggests that experience in manipulating objects could be an important factor in iconicity understanding, and that only after children have a certain amount of experience with the diverse actions that objects afford can they extract the motor regularities from those actions and subsequently represent those actions using gestures. Another factor that could play an important role in grasping iconicity is the set of actions and gestures that children observe at and imitate from their parents. For example, Namy, Vallas, and Knight-Schwarz (2008) found a strong association between parents' pretend play routines and children's iconic gestures at 16-22 months.
An interesting aspect noticed by McNeill (1992) is that children's iconic gestures constitute more ample and concrete representations of the events they describe, compared to adults' iconic gestures. Children tend to use their whole body and all the space around them as they depict a scene. Adults' gestures involve slight changes of body posture and mostly movements of the hands. Also, their gesture space is located in front of them, and not all around them, as is usually the case in children. McNeill (1992) considers that children's first iconic gestures are not completely detached from their corresponding actions, being rather enactments. For example, a 2½-year-old child observed by McNeill (1992) talked about a cartoon character, Sylvester, who climbed up a pipe. As the child was narrating this, he got down from his chair and walked toward another one, with the intention of climbing up on it. The fact that children's gestures involve the whole body and are very similar in amplitude and size to the actions depicted in the real setting in which the event took place illustrates, according to McNeill's (1992) interpretation, that children are capable only of a partial symbolism. This could also suggest that it is more difficult for children to verbally describe the actions involved in a complex scene, so that gestures are used to offer a more accurate description of the visual scene. Adults, on the other hand, have a more diverse spatial vocabulary repertoire, so that their gestures need not take on all the burden of communication, but rather supplement it. McNeill considers that, by investigating gesture production along the preschool years, we could better understand the progress that children make in reenacting events at a more symbolic scale, and so we could gain a more complete understanding of their symbolic development (McNeill, 1992).
Gesture's role in children's learning
Some consider that children move from a stage when their gestures are scaffolds for language development, predicting progress in language, to a stage when their gestures start to have more adult-like functions. As they become proficient language users, gestures take on the load of enriching the verbal discourse: illustrating complementary details and supplementing the information depicted in the verbal message (McNeill, 1992; Özçalışkan & Goldin-Meadow, 2009).
Others show that gestures continue, and maybe become even more important, in predicting and facilitating cognitive changes whenever children are in transition and learn new things in a wide range of domains. Preschoolers and schoolchildren spontaneously gesticulate when confronted with: algebraic equations (Broaders, Cook, Mitchell, & Goldin-Meadow, 2007; Perry, Breckinridge Church, & Goldin-Meadow, 1988), Piagetian conservation tasks (Breckinridge Church & Goldin-Meadow, 1986), balancing asymmetric beams (Pine, Lufkin, & Messer, 2004), the Tower of Hanoi problem (Beilock & Goldin-Meadow, 2010; Garber & Goldin-Meadow, 2002), mental rotation tasks (Chu & Kita, 2011), board games (Evans & Rubin, 1979), seasonal change problems (Crowder & Newman, 1993), and counting problems (Alibali & DiRusso, 1999). These studies showed that the gestures children produce while they explain how they solved a problem are an index of cognitive change. Gestures are a unique measure of cognitive change in real time, as in their gesturing children demonstrate implicit, but more advanced, knowledge that they are not yet able to articulate verbally or use properly to solve a task (Goldin-Meadow & Wagner, 2005; Goldin-Meadow, 2006; Goldin-Meadow, 2009). The results of these studies show that sometimes, and especially when they are in transition to a higher level of understanding, children offer different, more advanced explanations in their gestures compared to the information that they verbally articulate. These so-called gesture-speech mismatches offer a window into children's newly emerging ideas for solving a specific task, showing what they are prepared to learn (in other words, their zone of proximal development).
For example, non-conservers who produce gesture-speech mismatches in the Piagetian volume conservation task, when explaining whether the amount of liquid in the two glasses is the same or different, are more likely to acquire the concept of volume conservation after further instruction, compared with non-conservers who do not produce gesture-speech mismatches (Church & Goldin-Meadow, 1986). A child who explains that there is a "different" amount of water in the two glasses after the transformation, but uses one hand to indicate the diameter of the thinner glass and two hands to indicate that of the wider one, produces a gesture-speech mismatch. His/her gestures indicate that he/she has incorporated another dimension important for understanding volume, namely the diameter, and is soon going to understand the concept of volume conservation. On the other hand, a child who says the same thing and produces a matching gesture (i.e., pointing to the different levels of the water in the two glasses) does not show the same increase in performance after a lesson, compared to the child who produces a mismatch (Goldin-Meadow & Wagner, 2005; Goldin-Meadow, 2006).
Rather than being merely correlates of cognitive change, gestures can also lead to change. It has been demonstrated that gesture can facilitate children's learning because:
a) teachers implicitly process the knowledge that children express only in their gestures, and subsequently offer children information that is attuned to their zone of proximal development (Goldin-Meadow & Wagner, 2005);
b) children learn better if their teachers gesture in class. For example, if teachers themselves produce gesture-speech mismatches, children learn better than if teachers instruct them with one or two strategies in speech only (Singer & Goldin-Meadow, 2005). Also, children gesture more if their teachers gesture during the lesson (Cook & Goldin-Meadow, 2006);
c) if children gesture, their working memory "works" better, because they use fewer cognitive resources for speaking. For example, if children gesture when explaining how they solved a mathematical equivalence problem, while required at the same time to hold in mind a list of previously learned words, they remember more words than when not gesturing (Goldin-Meadow, Nusbaum, Kelly, & Wagner, 2001). This lightening effect is maintained even if children gesture about non-present objects (Ping & Goldin-Meadow, 2010);
d) explicitly encouraging children to gesture helps them discover new strategies for solving equivalence problems, compared with situations in which they do not gesture by choice or are told not to gesture (Broaders et al., 2007; Goldin-Meadow et al., 2001);
e) gestures help in retaining information in long-term memory. For example, children who were asked to reproduce specific gestures about solving algebraic equations solved more problems at a follow-up test four weeks later than children who were taught those strategies in speech only (Cook, Mitchell, & Goldin-Meadow, 2008);
f) when gestures are used, messages are better comprehended and remembered, and the transfer of knowledge is improved (Hostetter, 2011).
Different types of gestures have different developmental pathways, with pointing dramatically decreasing in frequency as children become proficient language users, and iconic gestures being produced more and more as children communicate. As their discourse becomes more elaborate and they start using longer sentences with complex syntactic structures, beats and metaphoric gestures also increase in frequency in children's gesture repertoire (Cartmill et al., 2012; McNeill, 1992). Still, as we have shown above, there are many unanswered questions about the mechanisms involved in gesture development during childhood, and more studies are needed in order to understand how children come to gesture at an adult level.
The evidence presented also suggests that gesture can lead to a more complete understanding of how children learn language, symbols, and a variety of new concepts in general. Part of the evidence suggests that the benefit can reside in observing others' gestures, and the other part suggests that gestures can benefit the individual performing them.
Gestures reflect embodiment
Gesture's role in grounding cognition
Gesture studies (some described above) bring valuable empirical evidence for the embodied cognition approach. The studies we mentioned so far show that we use our hands to make our first social connections with the world: to communicate with others and to learn language. Gestures continue to have an important role across development, when children learn new concepts or strategies to solve problems. Gesturing thus reflects embodied learning (Kontra, Goldin-Meadow, & Beilock, 2012). In what follows we will present evidence about how gesture supports an embodied view of cognition. However, as developmental evidence is relatively scarce, we will present studies that have adult participants.
Goldin-Meadow and Beilock (2010) and Hostetter and Alibali (2008) consider gesture to be a critical test for the embodiment perspective. The fact that gestures convey knowledge inherent in the cognitive system offers important support for the idea that this knowledge is closely tied to the body, and as such embodied (Hostetter & Alibali, 2008). Their framework, "Gesture as Simulated Action" (GSA; Hostetter & Alibali, 2008), posits that gestures originate in the brain's sensory-motor simulations of previous experiences (see the Introduction for the view of representations as multi-modal simulations). Gestures appear whenever these simulations become overt. A prediction of this framework is that people gesture more when speaking about experiences that involve actions or perceptual relations. This was demonstrated in a study in which participants were presented with certain spatial information either by watching an animated cartoon or by reading a description of that cartoon (Hostetter & Hopkins, 2002). Participants who saw the cartoon subsequently gestured more than those who just read the same information, probably because they could rely on richer spatial relations (Hostetter & Alibali, 2008; Hostetter & Hopkins, 2002).
Moreover, Hostetter and Alibali (2010) demonstrated that we gesture more when we speak about actions that we ourselves performed than when we describe information that we only experienced visually. In their study (Hostetter & Alibali, 2010), one group of adults was instructed to remember certain patterns of dots, and the other group was instructed to physically create such patterns. When later asked to describe those patterns, participants who had actively created the patterns gestured more than those who had simply been asked to remember the visual patterns. This supports the GSA framework, as more overt actions (gestures) should be observed when the information conveyed is saturated with motor actions.
Gestures as a way to better understand the connection between action and cognition
The movements of our hands when we act on objects and when we represent those acts on objects (i.e., when we gesture) are similar, but as Cartmill, Beilock, et al. (2012) highlight, gestures are representations, not mere copies of actions. Gestures are at the same time kinetically close to actions and representationally close to thoughts (Cartmill, Beilock, et al., 2012). This is why the authors consider that analyzing gesture could be an ideal way to understand the connection between action and thought. Similarly, Goldin-Meadow and Beilock (2010) propose that gestures influence cognition because they facilitate this connection. They present evidence (see below) that in some cases gestures can have an even bigger influence on subsequent cognitive representations than action itself.
As Cartmill, Beilock, et al. (2012) and Beilock and Goldin-Meadow (2010) argue, gestures can bring a unique contribution to a better understanding of how cognition is embodied. The actions that we perform are the foundation of the subsequent sensory-motor and perceptual simulations that the brain carries out when representing those actions. The gestures we produce are an external reflection of these simulations. As we will show later, they can also influence these simulations, just as action does.
Kontra et al. (2012) point out that action experience and early sensory-motor learning shape cognition. If we investigate the actions that individuals experience across the lifespan, we can better understand the process of embodied learning: how concepts are built in developmental time as a result of our actions (Kontra et al., 2012; Cartmill, Beilock, et al., 2012). Evidence about how our action history influences the way our brains simulate performing those actions is offered by Calvo-Merino, Glaser, Grèzes, Passingham, and Haggard (2005). They showed that there are differences in the brain areas activated as a function of previous expertise with the observed actions (Calvo-Merino et al., 2005). Calvo-Merino et al. (2005) found that different simulations are carried out in the brain when dancers watch videos of classical ballet (in which they had expertise) or videos of capoeira movements (in which they were novices). When they watched videos of their own dance style, the activated brain areas were similar to those engaged in performing those movements. This pattern of brain activation was not observed when they watched capoeira, for which they were not trained. These studies demonstrated that action has an influence on thought. Similar results were obtained when analyzing whether gesture could also have such an influence.
Beilock and Goldin-Meadow (2010) asked participants to solve the Tower of Hanoi task (TOH1). After that, the participants had to explain their performance, and their spontaneous gestures were assessed. Participants were then divided into two groups. One group, the switch group, was asked to solve a modified version of the classical TOH task (TOH2) in which the weights of the disks were inverted, so that the smallest disk was the heaviest (and could be lifted only with two hands) and the biggest was the lightest. The other group, the no-switch group, was presented again with the classical version, TOH1, in which the smallest disk also had the smallest weight and could be lifted with one hand. The performance of the switch group was then compared with that of the no-switch group when solving TOH for the second time. The more the participants in the switch group used one-handed gestures to describe moving the smallest disk, the more their performance decreased when solving TOH2. No such effect was observed for those in the no-switch group. In a second experiment, the explanation step was skipped for one group of participants who solved both TOH1 and TOH2. These participants performed better than those from the first study who had explained (and gestured about) their performance in TOH1 before switching to TOH2. This suggests that gesturing can affect performance if it is incompatible with subsequent actions (Beilock & Goldin-Meadow, 2010).
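For readers unfamiliar with the task, the classical Tower of Hanoi puzzle that participants solved has a well-known recursive structure: n disks must be moved from one peg to another, one disk at a time, never placing a larger disk on a smaller one. The sketch below is the standard textbook algorithm, offered only as an illustration of the puzzle itself; the function name `solve_hanoi` and the peg labels are our own choices, not taken from Beilock and Goldin-Meadow (2010).

```python
def solve_hanoi(n, source, target, spare, moves=None):
    """Return the list of (disk, from_peg, to_peg) moves that transfers
    n disks from `source` to `target`, using `spare` as the auxiliary peg.
    Disk 1 is the smallest, disk n the largest."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    # Clear the n-1 smaller disks out of the way onto the spare peg,
    solve_hanoi(n - 1, source, spare, target, moves)
    # move the largest remaining disk directly to the target,
    moves.append((n, source, target))
    # then restack the smaller disks on top of it.
    solve_hanoi(n - 1, spare, target, source, moves)
    return moves

moves = solve_hanoi(3, "A", "C", "B")
print(len(moves))  # 7: the optimal solution takes 2**n - 1 moves
```

In the switch condition (TOH2) the move sequence is identical; what changes is only the manual action each move requires (one hand vs. two), which is exactly why one-handed gestures produced about TOH1 became incompatible with solving TOH2.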
Another illustration of how gestures facilitate thinking by connecting it to action is offered by Carlson, Avraamides, Cary, and Strasberg (2007). They consider gestures a strategy for externalizing the representation currently activated in working memory. When tools or objects are acted upon, or when words or digits are named or written, thinking is facilitated and an otherwise abstract task can be solved more easily. Carlson et al. (2007) consider that gestures have similar benefits. Gestures are seen as scaffolds that can offer visual (but also kinesthetic) feedback about our progress in solving a task (Carlson et al., 2007). In this way, gesture facilitates the updating of the representations that are active in working memory. The authors analyzed the externalizing function of gesture in a counting task, where the pointing gesture could increase performance by allowing the counter to mark the elements that had already been counted. This strategy facilitates the application of the "one-to-one" principle of counting (i.e., each element is counted only once). Their results show that adults count with higher accuracy and speed (efficiency) if they point while they count. Also, when pointing is prohibited, individuals tend to nod their heads, or the functions previously served by gesture are taken over by compensatory speech. This suggests that gestures are used to index the external elements relevant for the task to be solved, and demonstrates that they can be seen as external scaffolds for cognition (Carlson et al., 2007). A similar view is supported by Spivey, Richardson, and Fitneva (2004), but with reference to eye movements. The external context is seen as a memory database that can be accessed by eye movements (see also Clark, 2011, and Clark & Chalmers, 1998, for the Extended Mind account). These movements create spatial anchors that facilitate the retrieval of elements from long-term memory, even if the elements are not currently present in the environment.
Spivey et al. (2004) consider that indexing toward the location of a now absent element originates in the sensory-motor simulations that take place in the brain when a specific concept is represented. According to the GSA framework, abstract pointing, a metaphoric way of pointing to imaginary objects in space (McNeill, 1992), or the concrete pointing described by Carlson et al. (2007), could also originate in the brain's simulations.
Conclusions and implications
Throughout this paper we have described the important functions that gestures serve across development. Gestures predict children's readiness to learn: first in language and then, as gestures consolidate across development, in a variety of domains. We presented evidence that gesture can also facilitate children's learning, and we described the research on the ontogenetic origins of children's gestures. Little is currently known about how children gain an understanding of iconic gestures and start using them at adults' levels. Smith and Pereira (2009) described developmental loops in which, at different developmental times, language influences symbolic understanding and then symbolic understanding influences language. The same changing relation could hold between iconic gesture and symbolic understanding, but further studies are needed in order to analyze the relation between the two systematically. By understanding the development of gestures we could also better understand their pervasive role in cognition. We propose that by better understanding toddlers' and preschoolers' iconic gestures we will gain a more complete and probably complementary perspective on their conceptual development. This prediction is based on the relations already documented between symbolic understanding and iconicity. Taking into consideration the studies that demonstrated the role of gesture in learning language, or in learning in educational contexts, we can expect gesture to shed light on the mechanisms involved in conceptual development. In the last part of the paper we tried to show how gesture studies bring support for the embodied cognition approach. By better comprehending the way gestures influence thinking, especially across development, we could better understand the connection between action, perception, and thinking, and, as a consequence, the way cognition is grounded.
A possible explanation is proposed by Kinsbourne (2006): adults' gestures have their ontogenetic origins in basic behaviors that can be traced back to the behaviors of infants. The author proposes that action and cognition are not separated and that an infant expresses his thoughts through the movements available to him (e.g., extension of different parts of the body for expressing withdrawal, and flexion for approach) (Kinsbourne, 2006). These synergisms later constrain gestures' meanings, meanings that reflect thinking. Although metaphoric gestures were not analyzed in this review, they might offer a solution to the problem of representing abstract concepts, similarly to the way metaphors in general are considered a mechanism for grounding abstract concepts in the modal systems of the brain (Casasanto, 2009; Gibbs, 2006). Müller (2008) considers metaphoric gestures to be hand movements that use multi-modal mappings in order to share an idea or emotion. In her view, gestures are formed from embodied daily activities that they iconically recreate (Müller, 2008). By analyzing gesture, we could reach a better understanding of the structure of concepts and their embodiment (Müller, 2008). As such, gestures may be one answer to the question of how our abstract thoughts are embodied: gestures might provide an alternative representational format (Clark, 2011).
It is possible that the following decades of research on cognition will prove that "cognition may simply be the operation of a complex system of noncognitive processes" (Smith & Sheya, 2010, p. 1).
Acredolo, L., & Goodwyn, S. (1985). Spontaneous signing in normal infants. Paper presented at the Biennial Meeting of the Society for Research in Child Development, Toronto.
Acredolo, L., & Goodwyn, S. (1988). Symbolic gesturing in normal infants. Child Development, 59(2), 450-466.
Alibali, M. W., & DiRusso, A. A. (1999). The function of gesture in learning to count: more than keeping track. Cognitive Development, 14(1), 37-56.
Barsalou, L.W. (2003). Situated simulation in the human conceptual system. Language and Cognitive Processes, 18, 513-562.
Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617-645.
Barsalou, L. W. (2009). Simulation, situated conceptualization, and prediction. Philosophical Transactions of the Royal Society B, 364, 1281-1289.
Bates, E., Camaioni, L., & Volterra, V. (1975). The acquisition of performatives prior to speech. Merrill-Palmer Quarterly, 21(3), 205-226.
Beilock, S. L., & Goldin-Meadow, S. (2010). Gesture Changes Thought by Grounding It in Action. Psychological Science, 21(11), 1605 -1610.
Boroditsky, L., & Prinz, J. (2008). What thoughts are made of. In G. R. Semin & E. R. Smith (Eds.), Embodied Grounding: Social, Cognitive, Affective, and Neuroscientific Approaches. New York: Cambridge University Press.
Breckinridge Church, R., & Goldin-Meadow, S. (1986). The mismatch between gesture and speech as an index of transitional knowledge. Cognition, 23(1), 43-71.
Broaders, S. C., Cook, S. W., Mitchell, Z., & Goldin-Meadow, S. (2007). Making children gesture brings out implicit knowledge and leads to learning. Journal of Experimental Psychology: General, 136(4), 539-550.
Calvo, P., & Gomila, A. (Eds.). (2008). Handbook of Cognitive Science: An Embodied Approach. San Diego: Elsevier.
Calvo-Merino, B., Glaser, D. E., Grèzes, J., Passingham, R. E., & Haggard, P. (2005). Action Observation and Acquired Motor Skills: An fMRI Study with Expert Dancers. Cerebral Cortex, 15(8), 1243-1249.
Carlson, R. A., Avraamides, M. N., Cary, M., & Strasberg, S. (2007). What do the hands externalize in simple arithmetic? Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(4), 747-756.
Cartmill, E. A., Beilock, S., & Goldin-Meadow, S. (2012). A word in the hand: Action, gesture and mental representation in humans and non-human primates. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367(1585), 129-143.
Cartmill, E. A., Demir, Ö. E., & Goldin-Meadow, S. (2012). Studying gesture. In E. Hoff (Ed.), Research Methods in Child Language: A Practical Guide (1st ed.). Wiley-Blackwell.
Casasanto, D. (2009). Embodiment of abstract concepts: good and bad in right- and left- handers. Journal of Experimental Psychology: General, 138(3), 351-367.
Chu, M., & Kita, S. (2011). The nature of gestures' beneficial role in spatial problem solving. Journal of Experimental Psychology: General, 140(1), 102-116.
Clark, A. (2011). Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford University Press, New York.
Clark, A., & Chalmers, D. J. (1998). The extended mind. Analysis, 58, 10-23.
Colonnesi, C., Stams, G. J. J. M., Koster, I., & Noom, M. J. (2010). The relation between pointing and language development: A meta-analysis. Developmental Review, 30(4), 352-366.
Cook, S. W., & Goldin-Meadow, S. (2006). The Role of Gesture in Learning: Do Children Use Their Hands to Change Their Minds? Journal of Cognition and Development, 7(2), 211-232.
Cook, S. W., Mitchell, Z., & Goldin-Meadow, S. (2008). Gesturing makes learning last. Cognition, 106(2), 1047-1058.
Crowder, E. M., & Newman, D. (1993). Telling what they know: The role of gesture and language in children's science explanations. Pragmatics & Cognition, 1(2), 341-376.
Evans, M. A., & Rubin, K. H. (1979). Hand Gestures as a Communicative Mode in School-Aged Children. The Journal of Genetic Psychology, 135(2), 189-196.
Gainotti, G., Spinelli, P., Scaricamazza, E., & Marra, C. (2013). The evaluation of sources of knowledge underlying different conceptual categories. Frontiers in Human Neuroscience, 7, 1-12.
Gallese, V., & Lakoff, G. (2005). The Brain's Concepts: The Role of the Sensory-Motor System in Conceptual Knowledge. Cognitive Neuropsychology, 22, 455-479.
Garber, P., & Goldin-Meadow, S. (2002). Gesture offers insight into problem-solving in adults and children. Cognitive Science, 26(6), 817-831.
Gibbs, R. W. (2006). Embodiment and cognitive science. Cambridge University Press.
Goldin-Meadow, S. (2006). Nonverbal communication: The hand's role in talking and thinking. In W. Damon, R. Lerner, D. Kuhn, & R. Siegler (Eds.), Handbook of Child Psychology (6th ed., Vol. 2: Cognition, Perception, and Language). New York: John Wiley & Sons, Inc.
Goldin-Meadow, S. (2007). Pointing Sets the Stage for Learning Language-and Creating Language. Child Development, 78(3), 741-745.
Goldin-Meadow, S. (2009). How Gesture Promotes Learning Throughout Childhood. Child Development Perspectives, 3(2), 106-111.
Goldin-Meadow, S., & Alibali, M. W. (2013). Gesture's Role in Speaking, Learning, and Creating Language. Annual Review of Psychology, 64(1), 257-283.
Goldin-Meadow, S., & Beilock, S. L. (2010). Action's Influence on Thought: The Case of Gesture. Perspectives on Psychological Science, 5(6), 664 -674.
Goldin-Meadow, S., & Butcher, C. (2003). Pointing toward two-word speech in young children. In S. Kita (Ed.), Pointing: Where language, culture, and cognition meet (pp. 85-107). Mahwah, NJ: Erlbaum.
Goldin-Meadow, S., Nusbaum, H., Kelly, S. D., & Wagner, S. (2001). Explaining math: Gesturing lightens the load. Psychological Science, 12(6), 516-522.
Goldin-Meadow, S., & Wagner, S. M. (2005). How our hands help us learn. Trends in Cognitive Sciences, 9(5), 234-241.
Goodwyn, S. W., Acredolo, L. P., & Brown, C. A. (2000). Impact of Symbolic Gesturing on Early Language Development. Journal of Nonverbal Behavior, 24(2), 81-103.
Hostetter, A. B. (2011). When do gestures communicate? A meta-analysis. Psychological Bulletin, 137(2), 297-315.
Hostetter, A. B., & Alibali, M. W. (2008). Visible embodiment: gestures as simulated action. Psychonomic Bulletin & Review, 15(3), 495-514.
Hostetter, A. B., & Alibali, M. W. (2010). Language, gesture, action! A test of the Gesture as Simulated Action framework. Journal of Memory and Language, 63(2), 245-257.
Hostetter, A. B., & Hopkins, W. D. (2002). The effect of thought structure on the production of lexical movements. Brain and Language, 82(1), 22-29.
Ionescu, T. (2011). Abordarea "embodied cognition" şi studiul dezvoltării cognitive [The embodied cognition approach and the study of cognitive development]. Revista de Psihologie, 57, 326-339.
Iverson, J. M., Capirci, O., & Caselli, M. C. (1994). From communication to language in two modalities. Cognitive Development, 9(1), 23-43.
Iverson, J. M., & Goldin-Meadow, S. (2005). Gesture paves the way for language development. Psychological Science, 16(5), 367-371.
Johnston, J. C., Durieux-Smith, A., & Bloom, K. (2005). Teaching gestural signs to infants to advance child development: A review of the evidence. First Language, 25(2), 235-251.
Kinsbourne, M. (2006). Gestures as embodied cognition: A neurodevelopmental interpretation. Gesture, 6(2), 205-214.
Kontra, C., Goldin-Meadow, S., & Beilock, S. L. (2012). Embodied Learning Across the Life Span. Topics in Cognitive Science, 4(4), 731-739.
Laakso, A. (2011). Embodiment and development in cognitive science. Cognition, Brain, Behavior: An Interdisciplinary Journal (Special Issue: Embodiment and Development), 15(4), 409-425.
LeBaron, C., & Streeck, J. (2000). Gesture, knowledge, and the world. In McNeill, D. (Ed.), Language and gesture. Cambridge University Press.
Liszkowski, U., Schäfer, M., Carpenter, M., & Tomasello, M. (2009). Prelinguistic infants, but not chimpanzees, communicate about absent entities. Psychological Science, 20(5), 654-660.
Maouene, J., & Ionescu, T. (2011). Editorial: Embodiment and development. Cognition, Brain, Behavior: An Interdisciplinary Journal (Special Issue: Embodiment and Development), 15, 403-408.
Martin, A. (2007). The representation of object concepts in the brain. Annual Review of Psychology, 58, 25-45.
Mayberry, R. I., & Nicoladis, E. (2000). Gesture reflects language development: Evidence from bilingual children. Current Directions in Psychological Science, 9(6), 192-196.
Müller, C. (2008). What gestures reveal about the nature of metaphor. In A. Cienki & C. Müller (Eds.), Metaphor and Gesture (Vol. 3). John Benjamins Publishing.
McNeill, D. (1992). Hand and mind: What gestures reveal about thought (Vol. xi). Chicago, IL, US: University of Chicago Press.
McNeill, D. (2005). Gesture and thought (Vol. xii). Chicago, IL, US: University of Chicago Press.
Namy, L. L. (2001). What's in a Name When It Isn't a Word? 17-Month-Olds' Mapping of Nonverbal Symbols to Object Categories. Infancy, 2(1), 73-86.
Namy, L. L. (2008). Recognition of iconicity doesn't come for free. Developmental Science, 11(6), 841-846.
Namy, L. L., Campbell, A. L., & Tomasello, M. (2004). The Changing Role of Iconicity in Non-Verbal Symbol Learning: A U-Shaped Trajectory in the Acquisition of Arbitrary Gestures. Journal of Cognition and Development, 5(1), 37-57.
Namy, L. L., Vallas, R., & Knight-Schwarz, J. (2008). Linking parent input and child receptivity to symbolic gestures. Gesture, 8(3), 302-324.
Namy, L. L., & Waxman, S. R. (1998). Words and Gestures: Infants' Interpretations of Different Forms of Symbolic Reference. Child Development, 69(2), 295-308.
Özçalışkan, Ş., & Goldin-Meadow, S. (2011). Is there an iconic gesture spurt at 26 months? In G. Stam & M. Ishino (Eds.), Integrating Gestures: The Interdisciplinary Nature of Gesture. Amsterdam, NL: John Benjamins.
Özçalışkan, Ş., & Goldin-Meadow, S. (2009). When gesture-speech combinations do and do not index linguistic change. Language and Cognitive Processes, 24(2), 190.
Özçalışkan, Ş., & Goldin-Meadow, S. (2005). Gesture is at the cutting edge of early language development. Cognition, 96, B101-B113.
Pecher, D., Zeelenberg, R., & Barsalou, L. W. (2003). Verifying different-modality properties for concepts produces switching costs, Psychological Science, 14(2), 119-123.
Perry, M., Breckinridge Church, R., & Goldin-Meadow, S. (1988). Transitional knowledge in the acquisition of concepts. Cognitive Development, 3(4), 359-400.
Pine, K. J., Lufkin, N., & Messer, D. (2004). More gestures than answers: Children learning about balance. Developmental Psychology, 40(6), 1059-1067.
Ping, R., & Goldin-Meadow, S. (2010). Gesturing saves cognitive resources when talking about nonpresent objects. Cognitive Science, 34(4), 602-619.
Rowe, M. L., & Goldin-Meadow, S. (2009). Early gesture selectively predicts later language learning. Developmental Science, 12(1), 182-187.
Sheya, A., & Smith, L. B. (2010). Development through sensorimotor coordination. In J. Stewart, O. Gapenne, & E. Di Paolo (Eds.), Enaction: Towards a New Paradigm for Cognitive Science (pp. 123-14). Cambridge, MA: MIT Press.
Singer, M. A., & Goldin-Meadow, S. (2005). Children learn when their teacher's gestures and speech differ. Psychological Science, 16(2), 85-89.
Smith, L. B., & Pereira, A. (2009). Shape, action, symbolic play and words: Overlapping loops of cause and consequence in developmental process. In S. Johnson (Ed.), Neo-constructivism: The New Science of Cognitive Development. Oxford University Press.
Smith, L. B., & Sheya, A. (2010). Is cognition enough to explain cognitive development? Topics in Cognitive Science, 1-11.
Spivey, M. J., Richardson, D. C., & Fitneva, S. A. (2004). Thinking outside the brain: Spatial indices to visual and linguistic information. In J. M. Henderson & F. Ferreira (Eds.), The Interface of Language, Vision, and Action: Eye Movements and the Visual World (pp. 161-189). New York: Psychology Press.
Stanfield, C., Williamson, R., & Özçalışkan, Ş. (2013). How early do children understand gesture-speech combinations with iconic gestures? Journal of Child Language, 1-10.
Striano, T., Rochat, P., & Legerstee, M. (2003). The role of modelling and request type on symbolic comprehension of objects and gestures in young children. Journal of Child Language, 30(1), 27-45.
Tolar, T. D., Lederberg, A. R., Gokhale, S., & Tomasello, M. (2008). The development of the ability to recognize the meaning of iconic signs. Journal of Deaf Studies and Deaf Education, 13(2), 225-240.
Tomasello, M. (2008). Origins of human communication (Vol. xiii). Cambridge, MA, US: MIT Press.
Tomasello, M., Carpenter, M., Call, J., Behne, T., & Moll, H. (2005). Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28(5), 675-735.
Tomasello, M., Carpenter, M., & Liszkowski, U. (2007). A new look at infant pointing. Child Development, 78(3), 705-722.
Tomasello, M., Striano, T., & Rochat, P. (1999). Do young children use objects as symbols? British Journal of Developmental Psychology, 17(4), 563-584.
Werner, H., & Kaplan, B. (1963). Symbol formation: An organismic-developmental approach to language and the expression of thought. New York: John Wiley & Sons, Inc.
Wilson, M. (2008). How did we get from there to here? An evolutionary perspective on embodied cognition. In P. Calvo & A. Gomila (Eds.), Handbook of Cognitive Science: An Embodied Approach. San Diego: Elsevier.
Dermina VASC, Thea IONESCU
Department of Psychology, Developmental Psychology Lab,
Babeș-Bolyai University, Cluj-Napoca, Romania
Publication information: Article title: Embodying Cognition: Gestures and Their Role in the Development of Thinking. Contributors: Vasc, Dermina - Author, Ionescu, Thea - Author. Journal title: Cognitie, Creier, Comportament. Volume: 17. Issue: 2. Publication date: June 2013. Page number: 149+.