Previous research indicates that infants' prediction of the goals of observed actions is influenced by their own experience with the type of agent performing the action (i.e., human hand vs. non-human agent) as well as by action-relevant features of goal objects (e.g., object size). The present study investigated the combined effects of these factors on 12-month-olds' action prediction. Infants' (N = 49) goal-directed gaze shifts were recorded as they observed 14 trials in which either a human hand or a mechanical claw reached for a small goal area (low-saliency goal) or a large goal area (high-saliency goal). Only infants who had observed the human hand reaching for a high-saliency goal fixated the goal object ahead of time, and they rapidly learned to predict the action goal across trials. By contrast, infants in all other conditions did not track the observed action in a predictive manner, and their gaze shifts to the action goal did not change systematically across trials. Thus, high-saliency goals seem to boost infants' predictive gaze shifts during the observation of human manual actions, but not of actions performed by a mechanical device. This supports the assumption that infants' action predictions are based on interactive effects of action-relevant object features (e.g., size) and infants' own action experience.
Gaze following (GF), the ability to synchronize visual attention with others, is often considered a foundation of social cognition. In this study, GF was assessed while varying the space between an actor's eyes and the gaze target. This was done to address a potential confound in the gold-standard GF performance test, namely the spatial bias of the actor's eye position that occurs when the actor turns the head to look at a target, offsetting the eyes from a centered position toward the attended target. Our results suggest that both 4.5-month-old (n = 27) and 6-month-old (n = 30) infants can follow an actor's gaze regardless of this proximity. This is the first demonstration that early GF does not depend on proximity cues, and our results strengthen previous findings suggesting that GF develops well before 6 months of age. The study was preregistered, and all data and analysis routines can be downloaded via the provided links.
The development of gaze following begins in early infancy and its developmental foundation has been under heavy debate. Using a longitudinal design (N = 118), we demonstrate that attachment quality predicts individual differences in the onset of gaze following, at six months of age, and that maternal postpartum depression predicts later gaze following, at 10 months. In addition, we report longitudinal stability in gaze following from 6 to 10 months. A full path model (using attachment, maternal depression and gaze following at six months) accounted for 21% of variance in gaze following at 10 months. These results suggest an experience-dependent development of gaze following, driven by the infant's own motivation to interact and engage with others (the social-first perspective).
We assessed whether the negative association between maternal postpartum depression (PPD) and infants' development of joint attention (gaze following) generalizes from WEIRD (Western, Educated, Industrialized, Rich, and Democratic) to Majority World contexts. The study was conducted in Bhutan (N = 105, M = 278 days, 52% males) but also draws on publicly available Swedish data (N = 113, M = 302 days, 49% males). We demonstrate that Bhutanese and Swedish infants' development follows the same trajectory. However, Bhutanese infants' gaze following was not related to maternal PPD, whereas the Swedish infants' was. The results support the notion that there are protective factors built into the interdependent family model. Despite all the benefits of being raised in a modern welfare state, Swedish infants seem, to an extent, to be more vulnerable to maternal mental health than Bhutanese infants.
Decades of research have emphasized the significance of gaze following in early development. Yet the developmental origin of this ability has remained poorly understood. We tested the claims made by two prominent theoretical perspectives to determine whether infants' gaze-following response is based on perceptual cues (motion of the head) or social cues (gaze direction). We found that 12-month-olds (N = 30) are able to inhibit motion cues and exclusively follow the direction of others' gaze. Six-month-olds (N = 29) and 4-month-olds (N = 30) can follow gaze, with a sensitivity to both perceptual and social cues. These results align with the perceptual narrowing hypothesis of gaze-following emergence, suggesting that social and perceptual cueing are non-exclusive paths to early developing gaze following.
Four-, 6-, and 11-month-old infants were presented with movies in which two adult actors conversed about everyday events, either facing each other or looking in opposite directions. Infants from 6 months of age made more gaze shifts between the actors, in accordance with the flow of conversation, when the actors were facing each other. A second experiment demonstrated that gaze following alone did not cause this difference. Instead, the results are consistent with a social-cognitive interpretation, suggesting that infants perceive the difference between face-to-face and back-to-back conversations and that they prefer to attend to a typical pattern of social interaction from 6 months of age.
Event-related potentials were recorded while infants observed congruent or incongruent grasping actions at the age when organized grasping first emerges (4-6 months of age). We demonstrate that the event-related potential component P400 encodes the congruency of power grasps at the age of 6 months (Experiment 1) and in 5-month-old infants who have developed the ability to use power grasps (Experiment 2). This effect does not extend to precision grasps, which infants of this age cannot perform (Experiment 3). Our findings suggest that infants' encoding of the relationship between an object and a grasping hand (the action-perception link) is highly specialized to actions and manual configurations of actions that infants are able to perform.
This study investigated the neural basis of non-verbal communication. Event-related potentials were recorded while 29 nine-month-old infants were presented with a give-me gesture (experimental condition) and the same hand shape rotated 90 degrees, resulting in a non-communicative hand configuration (control condition). We found different amplitude responses between the two conditions, captured in the P400 ERP component. Moreover, the size of this effect was modulated by participants' sex, with girls generally demonstrating a larger relative difference between the two conditions than boys.
The current study explores the neural correlates of action perception and their relation to infants' active experience of performing goal-directed actions. Study 1 provided active training with sticky mittens that enable grasping and object manipulation in prereaching 4-month-olds. After training, EEG was recorded while infants observed images of hands grasping toward (congruent) or away from (incongruent) objects. We demonstrate that brief active training facilitates social perception, as indexed by a larger amplitude of the P400 ERP component to congruent compared with incongruent trials. Study 2 presented 4-month-old infants with passive training, in which they observed an experimenter perform goal-directed reaching actions, followed by an ERP session identical to that used in Study 1. The second study did not demonstrate any differentiation between congruent and incongruent trials. These results suggest that (1) active experience alters the brain's response to goal-directed actions performed by others and (2) visual exposure alone is not sufficient for developing the neural networks subserving goal processing during action observation in infancy.
In addition to controlling the influx of light to the retina, the pupil also reacts to cognitive and emotional processing. This makes it possible to use pupil dilation as an index of cognitive effort and emotional arousal. We show how an extended version of a computational model of pupil dilation can account for pupillary contagion effects, where the pupil of an observer dilates upon seeing another person with dilated pupils. We also show how the model can reproduce the effects of cognitive effort in a math exercise. Furthermore, we investigate how the model can account for different explanations of the abnormal pupil response seen in individuals with, or at risk for, autism spectrum disorder. The reported computer simulations illustrate the usefulness of system-level models of the brain in addressing complex cognitive and emotional phenomena.
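To give a concrete flavor of such a system-level simulation, here is a minimal, illustrative sketch (not the published model; the coupling weights, time constant, and inputs are assumptions for demonstration only) of a pupil trace driven by light level, cognitive load, and an observed partner's pupil size:

```python
import numpy as np

def simulate_pupil(light, cognitive_load, observed_pupil,
                   baseline=3.0, tau=0.5, dt=0.05,
                   w_light=-1.5, w_load=0.6, w_contagion=0.4):
    """Toy leaky-integrator simulation of pupil diameter (mm).

    All weights and the time constant are illustrative assumptions,
    not parameters of the published model. Inputs are arrays of equal
    length sampled every `dt` seconds.
    """
    pupil = np.full(len(light), baseline)
    for t in range(1, len(light)):
        # Target diameter: baseline modulated by light level, cognitive
        # load, and the other person's (observed) pupil size.
        target = (baseline
                  + w_light * light[t]
                  + w_load * cognitive_load[t]
                  + w_contagion * (observed_pupil[t] - baseline))
        # First-order (leaky) approach toward the target.
        pupil[t] = pupil[t - 1] + dt / tau * (target - pupil[t - 1])
    return pupil

# Example: constant light, a burst of cognitive effort, and a partner
# whose pupils dilate halfway through the simulated trial.
n = 200
light = np.ones(n) * 0.5
load = np.where((np.arange(n) > 50) & (np.arange(n) < 120), 1.0, 0.0)
partner = np.where(np.arange(n) > 100, 4.0, 3.0)
trace = simulate_pupil(light, load, partner)
print(trace[:5].round(2), trace[-5:].round(2))
```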
Sixty infants divided evenly between 5 and 7 months of age were tested for their knowledge of object continuity versus discontinuity with a predictive tracking task. The stimulus event consisted of a moving ball that was briefly occluded for 20 trials. Both age groups predictively tracked the ball when it disappeared and reappeared via occlusion, but not when it disappeared and reappeared via implosion. Infants displayed high levels of predictive tracking from the first trial in the occlusion condition, and showed significant improvement across trials in the implosion condition. These results suggest that infants possess embodied knowledge to support differential tracking of continuously and discontinuously moving objects, but this tracking can be modified by visual experience.
Recent work implicates a link between action control systems and action understanding. In this study, we investigated the role of the motor system in the development of visual anticipation of others' actions. Twelve-month-olds engaged in behavioral and observation tasks. We measured containment activity, infants' spontaneous engagement in producing containment actions, and gaze latency, how quickly they shifted gaze to the goal object of another's containment actions. Findings revealed a positive relationship: infants who received the behavioral task first showed a strong correlation between their own actions and their subsequent gaze latency when observing another's actions. Learning over the course of trials was not evident. These findings demonstrate a direct influence of the motor system on online visual attention to others' actions early in development.
In this study, we explored the relation between two different measures used to investigate infants' expectations about goal-directed actions. In previous studies, expectations about action outcomes have been measured either after the action has been completed, that is, post hoc (e.g., via looking time), or while the action is being performed, that is, online (e.g., via predictive gaze). Here, we directly compared both types of measures. Experiment 1 demonstrated a dissociation between looking time and predictive gaze in 9-month-olds. Looking time reflected identity-related expectations, whereas predictive gaze did not; if anything, predictive gaze reflected location-related expectations. Experiment 2, including a wider age range, showed that the two measures remain dissociated over the first 3 years of life. Only after the third birthday does the dissociation turn into an association, with both measures then reflecting identity-related expectations. We discuss these findings in terms of an early dissociation between two mechanisms for action expectation. We speculate that while post hoc measures primarily tap ventral mechanisms for processing identity-related information (at least at a younger age), online measures primarily tap dorsal mechanisms for processing location-related information.
The present study aims to investigate the interplay of verbal and nonverbal communication with respect to infants' perception of pointing gestures. Infants were presented with still images of pointing hands (cue) in combination with an acoustic stimulus. The communicative content of this acoustic stimulus was varied from human and communicative to artificial. Saccadic reaction times (SRTs) from the cue to a peripheral target were measured as an indicator of the modulation of covert attention. A significant cueing effect (facilitated SRTs for congruent compared with incongruent trials) was present only in a condition with additional communicative and referential speech. In addition, the size of the cueing effect increased the more human and communicative the acoustic stimulus was. This indicates a beneficial effect of verbal communication on the perception of nonverbal communicative pointing gestures, emphasizing the important role of verbal communication in facilitating social understanding across domains. These findings additionally suggest that human and communicative (ostensive) signals are not qualitatively different from other, less social signals, but are simply quantitatively the most attention-grabbing among a number of other signals.
An eye tracking paradigm was used to investigate how infants' attention is modulated by observed goal-directed manual grasping actions. In Experiment 1, we presented 3-, 5-, and 7-month-old infants with a static picture of a grasping hand, followed by a target appearing at a location either congruent or incongruent with the grasping direction of the hand. The latency of infants' gaze shifts from the hand to the target was recorded and compared between congruent and incongruent trials. Results demonstrate a congruency effect from 5 months of age. A second experiment showed that the congruency effect of Experiment 1 does not extend to a visually similar mechanical claw (instead of the grasping hand). Together these two experiments describe the onset of covert attention shifts in response to manual actions and relate these findings to the onset of manual grasping.
During the first year of life, infants develop the capacity to follow the gaze of others. This behavior allows sharing attention and facilitates language acquisition and cognitive development. This article reviews studies that investigated gaze following (GF) before 12 months of age in typically developing infants and discusses current theoretical perspectives on early GF. Recent research has revealed that early GF is highly dependent on situational constraints and individual characteristics, but theories that describe the underlying mechanisms have partly failed to consider this complexity. We propose a novel framework, termed the perceptual narrowing account of GF, that may have the potential to integrate existing theoretical accounts.
Motor impairments are not part of the diagnostic criteria for autism spectrum disorder (ASD) but are overrepresented in the ASD population. Deficits in prospective motor control have been demonstrated in adults and older children with ASD but have never before been examined in infants at familial risk for the disorder. We assessed the ability to prospectively control reach-to-grasp actions in 10-month-old siblings of children with ASD (high-risk group, n = 29, 13 female) as well as in a low-risk control group (n = 16, 8 female). The task was to catch a ball rolling on a curvilinear path off an inclined surface. The low-risk group performed predictive reaches when catching the ball, whereas the high-risk group started their movements reactively. The high-risk group started their reaches significantly later than the low-risk group (p = .03). These results indicate impaired prospective motor control in infants at familial risk for ASD.
This research investigated infants’ online perception of give-me gestures during observation of a social interaction. In the first experiment, goal-directed eye movements of 12-month-olds were recorded as they observed a give-and-take interaction in which an object is passed from one individual to another. Infants’ gaze shifts from the passing hand to the receiving hand were significantly faster when the receiving hand formed a give-me gesture relative to when it was presented as an inverted hand shape. Experiment 2 revealed that infants’ goal-directed gaze shifts were not based on different affordances of the two receiving hands. Two additional control experiments further demonstrated that differences in infants’ online gaze behavior were not mediated by an attentional preference for the give-me gesture. Together, our findings provide evidence that properties of social action goals influence infants’ online gaze during action observation. The current studies demonstrate that infants have expectations about well-formed object transfer actions between social agents. We suggest that 12-month-olds are sensitive to social goals within the context of give-and-take interactions while observing from a third-party perspective.
We examined the hypothesis that predictive gaze during observation of other people's actions depends on the activation of corresponding action plans in the observer. Using transcranial magnetic stimulation (TMS) and eye-tracking technology, we found that stimulation of the motor hand area, but not of the leg area, slowed predictive gaze behavior (compared with no TMS). This result shows that predictive eye movements to others' action goals depend on a somatotopic recruitment of the observer's motor system. The study provides direct support for the view that a direct matching process implemented in the mirror-neuron system plays a functional role in real-time goal prediction.
This eye tracking study investigated the degree to which biological motion information from manual point-light displays provides sufficient information to elicit anticipatory eye movements. We compared gaze performance of adults observing a biological motion point-light display of a hand reaching for a goal object or a non-biological version of the same event. Participants anticipated the goal of the point-light action in the biological motion condition but not in a non-biological control condition. The present study demonstrates that kinematic information from biological motion can be used to anticipate the goal of other people's point-light actions and that the presence of biological motion is sufficient for anticipation to occur.
Developmental psychology and cultural evolution are concerned with the same research questions but rarely interact. Collaboration between these fields could lead to substantial progress. Developmental psychology and related fields such as educational science and linguistics explore how behavior and cognition develop through combinations of social and individual experiences and efforts. Human developmental processes display remarkable plasticity, allowing children to master complex tasks, many of which are of recent origin and not part of our biological history, such as mental arithmetic or pottery. It is this potency of human developmental mechanisms that allows humans to have culture on a grand scale. Biological evolution would only establish such plasticity if the combinatorial problems associated with flexibility could be solved, biological goals could be reasonably safeguarded, and cultural transmission could be kept faithful. We suggest that cultural information can guide development in a similar way as genes do, provided that cultural evolution can establish productive transmission/teaching trajectories that allow for incremental acquisition of complex tasks. We construct a principled model of development that fulfills the needs of both fields, which we refer to as Incremental Functional Development. This process is driven by an error-correcting mechanism that attempts to fulfill combinations of cultural and inborn goals, using cultural information about structure. It supports the acquisition of complex skills. Over generations, it maintains function rather than structure, which may resolve outstanding issues about cultural transmission. The presence of cultural goals gives the mechanism an open architecture that becomes an engine for cultural evolution.
Eye tracking has the potential to characterize autism at a unique intermediate level, with links 'down' to underlying neurocognitive networks, as well as 'up' to everyday function and dysfunction. Because it is non-invasive and does not require advanced motor responses or language, eye tracking is particularly important for the study of young children and infants. In this article, we review eye tracking studies of young children with autism spectrum disorder (ASD) and children at risk for ASD. Reduced looking time at people and faces, as well as problems with disengagement of attention, appear to be among the earliest signs of ASD, emerging during the first year of life. In toddlers with ASD, altered looking patterns across facial parts such as the eyes and mouth have been found, together with limited orienting to biological motion. We provide a detailed discussion of these and other key findings and highlight methodological opportunities and challenges for eye tracking research of young children with ASD. We conclude that eye tracking can reveal important features of the complex picture of autism.
Background: Effective multisensory processing develops in infancy and is thought to be important for the perception of unified and multimodal objects and events. Previous research suggests impaired multisensory processing in autism, but its role in the early development of the disorder is yet uncertain. Here, using a prospective longitudinal design, we tested whether reduced visual attention to audiovisual synchrony is an infant marker of later-emerging autism diagnosis.
Methods: We studied 10-month-old siblings of children with autism using an eye tracking task previously used in studies of preschoolers. The task assessed the effect of manipulations of audiovisual synchrony on viewing patterns while the infants were observing point light displays of biological motion. We analyzed the gaze data recorded in infancy according to diagnostic status at 3 years of age (DSM-5).
Results: Ten-month-old infants who later received an autism diagnosis did not orient to audiovisual synchrony expressed within biological motion. In contrast, both low-risk infants and high-risk siblings without autism at follow-up had a strong preference for this type of information. No group differences were observed in terms of orienting to upright biological motion.
Conclusions: This study suggests that reduced orienting to audiovisual synchrony within biological motion is an early sign of autism. The findings support the view that poor multisensory processing could be an important antecedent marker of this neurodevelopmental condition.
Eye tracking was used to show that 18-month-old infants are sensitive to social context as a sign that others' actions are bound together as a collaborative sequence based on a joint goal. Infants observed five identical demonstrations in which Actor 1 moved a block to one location and Actor 2 moved the same block to a new location, creating a sequence of actions that could be considered either individual actions or collaboration. In the test phase, Actor 1 was alone and sitting so that she could reach both locations. The question was whether she would place a new block in the location she had used previously (individual goal) or in the location that could be considered the goal of the collaboration (joint goal). Importantly, in the Social condition, the actors were socially engaged with each other before and during the demonstration, while in the Non-Social condition, they were not. Results revealed that infants in the Social condition spontaneously anticipated Actor 1 placing her block in the joint goal location more often than those in the Non-Social condition. Thus, the social context seems to allow infants to bind actions into a collaborative sequence and anticipate joint rather than individual goals, giving insight into how actions are perceived using top-down processing early in life.
Pupillary contagion, responding to pupil size observed in other people with changes in one's own pupil, has been found in adults and suggests that arousal and other internal states could be transferred across individuals using a subtle physiological cue. Examining this phenomenon developmentally gives insight into its origins and underlying mechanisms, such as whether it is an automatic adaptation already present in infancy. In the current study, 6- and 9-month-olds viewed schematic depictions of eyes with smaller and larger pupils (pairs of concentric circles with smaller and larger black centers) while their own pupil sizes were recorded. Control stimuli were comparable squares. For both age groups, infants' pupil size was greater when they viewed large-center circles than when they viewed small-center circles, and no differences were found for large-center compared with small-center squares. The findings suggest that infants are sensitive and responsive to subtle cues to other people's internal states, a mechanism that would be beneficial for early social development.
BACKGROUND: How is the perception of collaboration influenced by individual characteristics, in particular high levels of callous-unemotional (CU) traits? CU traits are associated with low empathy and endorsement of negative social goals such as dominance and forced respect. Thus, it is possible that they could relate to difficulties in interpreting that others are collaborating based on a shared goal.
METHODS: In the current study, a community sample of 15- to 16-year-olds participated in an eye tracking task measuring whether they expect that others engaged in an action sequence are collaborating, depending on the emotion the actors display toward each other. Positive emotion would indicate that they share a goal, while negative emotion would indicate that they hold individual goals.
RESULTS: When the actors showed positive emotion toward each other, expectations of collaboration varied with CU traits. The higher adolescents were on CU traits, the less likely they were to expect collaboration. When the actors showed negative emotion toward each other, CU traits did not influence expectations of collaboration.
CONCLUSIONS: The findings suggest that CU traits are associated with difficulty in perceiving positive social interactions, which could further contribute to the behavioral and emotional problems common to those with high CU traits.
The development of children's ability to identify facial emotional expressions has long been suggested to be experience dependent, with parental caregiving as an important influencing factor. This study attempts to further this knowledge by examining disorganization of the attachment system as a potential psychological mechanism behind aberrant caregiving experiences and deviations in the ability to identify facial emotional expressions. Typically developing children (N = 105, 49.5% boys) aged 6–7 years (M = 6 years 8 months, SD = 1.8 months) completed an attachment representation task and an emotion identification task, and parents rated children's negative emotionality. The results showed a generally diminished ability in disorganized children to identify facial emotional expressions, but no response biases. Disorganized attachment was also related to higher levels of negative emotionality, but discrimination of emotional expressions did not moderate or mediate this relation. Our novel findings relate disorganized attachment to deviations in emotion identification, and therefore suggest that disorganization of the attachment system may constitute a psychological mechanism linking aberrant caregiving experiences to deviations in children's ability to identify facial emotional expressions. Our findings further suggest that deviations in emotion identification in disorganized children, in the absence of maltreatment, may manifest in a generally diminished ability to identify emotional expressions, rather than in specific response biases.
The current research examined whether young children react to inconsistencies between a speaker's language and her knowledge, or lack of knowledge, about reality. Gaze behavior toward the speaker was examined during two key frames: prior to and following the location name. The present findings demonstrate that even before the location name is spoken, 24-month-olds (N = 122) differentiate between scenarios in which the speaker is knowledgeable or ignorant about where the object is. Following the location name, infant gaze was largely influenced by the inconsistency of the language; that is, infants looked more at the speaker when she mentioned a location name that was inconsistent with her knowledge, or lack of knowledge, of the object's transfer. The current results demonstrate that by two years of age, children have begun to take into account speakers' knowledge or ignorance of an event as they process statements about reality.
We investigated the neural correlates of chasing perception in infancy to determine whether animated interactions are processed as social events. Using EEG and an ERP design with animations of simple geometric shapes, we examined whether the positive posterior (P400) component, previously found in response to social stimuli, as well as the attention-related negative fronto-central component (Nc), differs when infants observe a chaser versus a non-chaser. In Study 1, the chaser was compared to an inanimate object. In Study 2, the chaser was compared to an animate but non-chasing agent (a randomly moving agent). Results demonstrate no difference in the Nc component, but significantly higher P400 amplitude when the chasing agent was compared to either the inanimate object or the randomly moving agent. We also find a difference in the N290 component in both studies, and in the P200 component in Study 2, when the chasing agent is compared to the randomly moving agent. The present studies demonstrate for the first time that infants process correlated motion such as chasing as a social interaction. The perception of the chasing agent elicits stronger time-locked responses, denoting a link between motion perception and social cognition.
Early childhood educators' math anxiety and its relation to their frequency of pedagogic actions was examined through a questionnaire completed by 352 participants (aged 21–65) representative of the Swedish municipality where the study was conducted. Our sample contained 189 certified preschool teachers and 163 preschool caregivers, who differed significantly in their reported ratings of math teaching anxiety. Results revealed that certified preschool teachers who reported higher levels of math anxiety also reported teaching and talking about mathematics content less frequently. When controlling for certified preschool teachers' gender, age, years of work in preschools, and whether they work only with younger children (1–3 years), older children (4–6 years), or both groups (1–6-year-olds), certified preschool teachers' general math anxiety and math teaching anxiety predicted their reported frequency of math teaching and frequency of conversations about numbers, patterns, and geometric concepts, with the strongest effects in gatherings, excursions, and situations designed to teach mathematics to preschool children. Preschool caregivers' math anxiety measures and their reported frequency of pedagogic actions did not display statistically significant relations. Findings showed setting-specific associations between certified preschool teachers' general math anxiety, math teaching anxiety, and their avoidance of mathematics content, highlighting the importance of early childhood educators' awareness of math anxiety, its nature, and its consequences for teaching practices.
We investigated the relations between self-reported math anxiety, task difficulty, and pupil dilation in adults and very young children during math tasks of varying difficulty. While task difficulty significantly influenced pupillary responses in both groups, the association between self-reported math anxiety and pupil dilation differed across age cohorts. The children exhibited resilience to the effects of math anxiety, hinting at additional influential factors, such as formal math education experiences, that shape people's relation to mathematics and its impact on cognitive processes over time. Contrary to expectations, no significant association between self-reported math anxiety and pupil dilation during task anticipation was found in either group. In adults, math anxiety influenced pupil dilation exclusively during the initial phase of task processing, indicating heightened cognitive load, but this influence diminished during sustained task processing. Theoretical implications emphasize the need to explore individual differences, cognitive strategies, and the developmental trajectory of math anxiety in very young children.
During the first 2 years of life, an infant's vocabulary grows at an impressive rate. In the current study, we investigated the impact of three challenges that infants need to overcome to learn new words and expand the size of their vocabulary. We used longitudinal eye-tracking data (n = 118) to assess sequence learning, associative learning, and probability processing abilities at ages 6, 10, and 18 months. Infants' ability to efficiently solve these tasks was used to predict vocabulary size at age 18 months. We demonstrate that the ability to make audio-visual associations and to predict sequences of visual events predicts vocabulary size in toddlers (accounting for 20% of the variance). Our results indicate that statistical learning in some, but not all, domains has a role in vocabulary development.
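For readers unfamiliar with this kind of analysis, a hedged sketch of a multiple regression predicting vocabulary size from several infancy measures is shown below; the variable names, simulated data, and coefficients are hypothetical and do not reproduce the study's analysis:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 118  # sample size reported in the abstract

# Hypothetical predictors measured in infancy (standardized scores).
sequence_learning = rng.normal(size=n)
audio_visual_association = rng.normal(size=n)
probability_processing = rng.normal(size=n)

# Simulated 18-month vocabulary size, for illustration only.
vocabulary = (20 * audio_visual_association
              + 15 * sequence_learning
              + rng.normal(scale=40, size=n) + 150)

X = np.column_stack([sequence_learning,
                     audio_visual_association,
                     probability_processing])
model = LinearRegression().fit(X, vocabulary)
print("Explained variance (R^2):", round(model.score(X, vocabulary), 2))
print("Coefficients:", model.coef_.round(1))
```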
Previous research suggests that subset-knowers have an approximate understanding of small numbers. However, it is still unclear exactly what subset-knowers understand about small numbers. To investigate this further, we tested 133 participants, aged 2.6–4 years, on a newly developed eye-tracking task targeting cardinal recognition. Participants were presented with two sets differing in cardinality (1–4 items) and asked to find a specific cardinality. Our main finding was that, at the group level, subset-knowers could identify all presented targets at rates above chance, further supporting the view that subset-knowers understand several of the basic principles of small numbers. Exploratory analyses tentatively suggest that one-knowers could identify the targets 1 and 2 but struggled when the target was 3 or 4, whereas two-knowers and above could identify all targets at rates above chance. This might tentatively suggest that subset-knowers have an approximate understanding of numbers just above (i.e., +1) their current knower level. We discuss the implications of these results at length.
In this paper, we propose a novel model, the TWAIN model, to describe the durations of two-step actions in a reach-to-place task in human infants. Previous research demonstrates that infants and adults plan their actions across multiple steps. They adjust, for instance, the velocity of a reaching action depending on what they intend to do with the object once it is grasped. Despite these findings, and irrespective of the larger context in which an action occurs, current models (e.g., Fitts' law) target single, isolated actions, such as pointing to a goal. In the current paper, we develop and empirically test a more ecologically valid model of two-step action planning. More specifically, 61 18-month-olds took part in a reach-to-place task, and their reaching and placing durations were measured with a motion-capture system. In model comparison, our model explained the most variance in placing duration and outperformed six previously suggested models. We show that including parameters of the first action step, here the duration of the reaching action, can improve the description of the second action step, here the duration of the placing action. This move toward more ecologically valid models of action planning contributes knowledge as well as a framework for assessing human-machine interactions. The TWAIN model provides an updated way to quantify motor learning at the age when these abilities develop, which might help to assess performance in typically developing human children.
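A minimal sketch of the kind of model comparison described above, fitting one duration model with and one without a first-step (reach) term and comparing them by AIC; the simulated data, predictors, and candidate models are placeholders, not the TWAIN model or the authors' analysis:

```python
import numpy as np

def aic(y, y_hat, k):
    """Akaike information criterion under a Gaussian error model."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(1)
n = 61  # number of infants in the abstract
reach_duration = rng.normal(0.8, 0.1, n)   # hypothetical reach durations (s)
difficulty = rng.uniform(2, 5, n)          # hypothetical placing-difficulty index
place_duration = 0.3 + 0.2 * difficulty + 0.5 * reach_duration + rng.normal(0, 0.05, n)

# Candidate 1: placing duration from difficulty alone (single-action style model).
X1 = np.column_stack([np.ones(n), difficulty])
b1, *_ = np.linalg.lstsq(X1, place_duration, rcond=None)

# Candidate 2: also include the first step (reach duration), in the spirit of
# using parameters of the first action step to describe the second.
X2 = np.column_stack([np.ones(n), difficulty, reach_duration])
b2, *_ = np.linalg.lstsq(X2, place_duration, rcond=None)

print("AIC without reach term:", round(aic(place_duration, X1 @ b1, k=2), 1))
print("AIC with reach term:   ", round(aic(place_duration, X2 @ b2, k=3), 1))
```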
The importance of executive functioning for later life outcomes, along with its potential to be positively affected by intervention programs, motivates the need to find early markers of executive functioning. In this study, 18-month-olds performed three executive-function tasks (involving simple inhibition, working memory, and more complex inhibition) and a motion-capture task assessing prospective motor control during reaching. We demonstrated that prospective motor control, as measured by the peak velocity of the first movement unit, is related to infants' performance on simple-inhibition and working memory tasks. The current study provides evidence that motor control and executive functioning are intertwined early in life, which suggests an embodied perspective on executive-functioning development. We argue that executive functions and prospective motor control develop from a common source and a single motive: to control action. This is the first demonstration that low-level movement planning is related to higher-order executive control early in life.
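The "peak velocity of the first movement unit" measure is typically extracted from the speed profile of the reach; the sketch below illustrates one way to compute it from motion-capture samples, with onset and smoothing criteria that are illustrative assumptions rather than the study's exact definitions:

```python
import numpy as np

def peak_velocity_first_unit(positions, dt):
    """Peak speed (mm/s) of the first movement unit of a reach.

    `positions` is an (n, 3) array of 3-D marker positions in mm sampled
    every `dt` seconds (e.g., from a motion-capture system). The first
    movement unit is taken here as the stretch of the speed profile up to
    its first local maximum; onset and smoothing criteria are simplified.
    """
    velocity = np.diff(positions, axis=0) / dt
    speed = np.linalg.norm(velocity, axis=1)
    speed = np.convolve(speed, np.ones(5) / 5, mode="same")  # suppress marker jitter

    onset = int(np.argmax(speed > 0.05 * speed.max()))  # illustrative onset criterion
    s = speed[onset:]
    # First local maximum after onset = peak of the first movement unit.
    peaks = np.where((s[1:-1] > s[:-2]) & (s[1:-1] >= s[2:]))[0] + 1
    first_peak = peaks[0] if len(peaks) else int(np.argmax(s))
    return float(s[first_peak])

# Example with synthetic data: a smooth 0.5 s, 150 mm reach sampled at 240 Hz.
t = np.linspace(0, 0.5, 120)[:, None]
x = 150 * (1 - np.cos(np.pi * t / 0.5)) / 2
positions = np.hstack([x, np.zeros_like(t), np.zeros_like(t)])
print(round(peak_velocity_first_unit(positions, dt=t[1, 0] - t[0, 0]), 1), "mm/s")
```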
Prospective motor control, a key element of action planning, is the ability to adjust one's actions to task demands and action goals in an anticipatory manner. The current study investigates whether 14-month-olds can prospectively control their reaching actions based on the difficulty of the subsequent action. We used a reach-to-place task in which the difficulty of the placing action was varied by goal size and goal distance. To target prospective motor control, we determined the kinematics of the prior reaching movements using a motion-tracking system. Peak velocity of the first movement unit of the reach served as the indicator of prospective motor control. Both difficulty aspects (goal size and goal distance) affected the prior reaching, suggesting that both these aspects of the subsequent action have an impact on the prior action. The smaller the goal size and the longer the distance to the goal, the slower infants were in the beginning of their reach toward the object. Additionally, we modeled movement times of both reaching and placing actions using a formulation of Fitts' law. The model was significant for both placing and reaching movement times. These findings suggest that 14-month-olds can plan their future actions and prospectively control their related movements with respect to future task difficulties.
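The abstract does not spell out the exact formulation used; a standard form of Fitts' law, relating movement time MT to goal distance D and goal width W through empirically fitted constants a and b, is

MT = a + b \log_2\left(\frac{2D}{W}\right),

where the logarithmic term is the index of difficulty, so smaller or more distant goals yield longer predicted movement times.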
This study investigates how infants use visual and sensorimotor information to prospectively control their actions. We gave 14-month-olds two objects of different weight and observed how high they lifted them, using a Qualisys Motion Capture System. In one condition, the two objects were visually distinct (different color condition); in another, they were visually identical (same color condition). Lifting amplitudes of the first movement unit were analyzed in order to assess prospective control. Results demonstrate that infants lifted the light object higher than the heavy object, especially when vision could be used to assess weight (different color condition). When confronted with two visually identical objects of different weight (same color condition), infants showed a different lifting pattern than in the different color condition, expressed in a significant interaction effect between object weight and color condition on lifting amplitude. These results indicate that (a) visual information about object weight can be used to prospectively control lifting actions and that (b) infants are able to prospectively control their lifting actions even without visual information about object weight. We argue that infants, in the absence of reliable visual information about object weight, rely more heavily on non-visual information (tactile cues, sensorimotor memory) to estimate weight and pre-adjust their lifting actions in a prospective manner.
The current study is the first to investigate the neural correlates of infants' detection of pro- and antisocial agents. Differences in the ERP component P400 over posterior temporal areas were found during 6-month-olds' observation of helping and hindering agents (Experiment 1), but not during observation of identically moving agents that did not help or hinder (Experiment 2). The results demonstrate that the P400 component indexes activation of infants' memories of previously perceived interactions between social agents. This suggests that similar processes might be involved in infants' processing of pro- and antisocial agents and in other social perception processes (encoding gaze direction, goal-directed grasping, and pointing).