Plasticity refers to changes in the brain that enable an organism to adapt its behavior in the face of changing environmental demands. The evolutionary role of plasticity is to provide the cognitive flexibility to learn from experiences, to monitor the world based on learned predictions, and to adjust actions when those predictions are violated. Both progressive (myelination) and regressive (synaptic pruning) brain changes support this type of adaptation. Experience-driven changes in neural connections underlie the ability to learn and update thoughts and behaviors throughout life. Many cognitive and behavioral indices exhibit nonlinear life-span trajectories, suggesting the existence of specific sensitive developmental periods of heightened plasticity. We propose that age-related differences in learning capabilities and behavioral performance reflect the distinct maturational timetables of subcortical learning systems and modulatory prefrontal regions. We focus specifically on the developmental transition of adolescence, during which individuals experience difficulty flexibly adjusting their behavior when confronted with unexpected and emotionally salient events. In this article, we review findings illustrating this phenomenon and how it varies across individuals.

Keywords: adolescence, development, individual differences, learning, plasticity, reward, self-control

One of the fundamental purposes of brain plasticity is to provide the capacity to adapt and optimize behavior in accordance with current environmental demands. This capability to learn through experience to obtain reward or to avoid danger serves a clear evolutionary purpose. An organism’s fitness may critically depend on the ability to adjust behavior to maximize the likelihood of a successful hunt or, more contemporarily, a lucrative business deal.
Subcortical systems have been shown to support essential, evolutionarily conserved learning processes involving reward (Galvan et al., 2005; Pagnoni et al., 2002; Schultz et al., 1997) and threat (Delgado et al., 2008; Soliman et al., 2010). But these systems do not operate independently; rather, they are part of a broad interactive network of brain regions (Casey et al., 2001). Regions of prefrontal cortex are thought to play a modulatory role within this network by enabling the suppression of the subcortical regions involved in immediate, motivationally driven behaviors in favor of longer-term, goal-oriented ones (for a review, see Somerville and Casey, 2010). The ability to adapt and optimize behavior varies as a function of age, the individual, and context. Age-related differences in cognitive flexibility may arise from both the maturational constraints of developing brain regions (Galvan et al., 2006) and the connectivity between interacting brain systems (Liston et al., 2006). These differences are particularly apparent during transitional periods, when previously adaptive behaviors may gradually become incompatible with the new range of experiences that arise across the lifespan. Flexibility in adjusting behavior to changing environmental demands also depends upon an individual’s unique experiential history. As such, some individuals are less able than others to flexibly suppress an inappropriate action, such as not eating cookies when dieting, and that flexibility is particularly challenged in the context of highly salient cues (e.g., an ice cream sundae). This paper examines flexibility and rigidity in behavioral adaptation as environmental demands vary across development and individuals.
Learning across development is a continuous interactive process shaped by both progressive (synaptogenesis, myelination) and regressive (synaptic pruning, apoptosis) neuronal changes (Brown et al., 2005; Casey et al., 2006; De Haan and Johnson, 2003; Thompson and Nelson, 2001) as the brain adapts to its environment. These plastic changes have been classified as either experience-expectant or experience-dependent (Greenough et al., 1987). Experience-expectant mechanisms process environmental inputs common to all individuals of a species, whereas experience-dependent mechanisms are sensitive to the specific inputs to which an individual is exposed. Experience-expectant plasticity is thought to shape species-typical development, while learning seems to be more reliant on experience-dependent input (Galvan, 2010; Greenough et al., 1987). As an individual interacts with the environment and experiences accumulate, skills and behaviors become gradually modularized (Johnson and De Haan, 2011; Karmiloff-Smith, 1992) or entrenched (Zevin and Seidenberg, 2004). The capacity for plastic remodeling is reduced over the course of development, and the structural and functional architecture of the brain, once heavily influenced by new experiences, becomes increasingly fixed (Feldman and Brecht, 2005; Zevin and Seidenberg, 2004). Thus, one’s development reflects not only biological constraints but also the effects of experiential history, which interact to shape and constrain behavior.

One of the essential aspects of plasticity is the capacity to adapt to changing environmental demands over time. Evolution has shaped the brain to deal successfully with the physical and social demands of the world. As these demands change with the transitions through childhood, adolescence, young adulthood, and middle to late adulthood, corresponding adjustments in behavior are required.
In this paper, we focus on adolescence as one such period of changing environmental demands. By definition, adolescence poses new demands, as the individual moves from dependence on parents to relative independence. This developmental transition is not specific to humans. Independence-seeking behaviors, including increases in peer interactions and novelty seeking, are prevalent across species. Individuals within a species typically emigrate away from the home territory around the time of sexual maturation, in order to reduce inbreeding, as inbred offspring exhibit lower viability due to greater expression of recessive genes (Bixler, 1992; Moore, 1992; Pereira and Altmann, 1985; Schlegel and Barry, 1991). These independence-seeking behaviors serve adaptive functions (Crockett and Pope, 1993; Irwin and Millstein, 1986; Spear, 2010), increasing the probability of reproductive success, improving life circumstances, and obtaining resources necessary for survival (Belsky et al., 1991; Csikszentmihalyi and Larson, 1987; Daly and Wilson, 1987; Meschke and Silbereisen, 1997). These behaviors have been suggested to be the product of a biologically driven imbalance between heightened novelty and sensation seeking and immature “self-regulatory competence” (Steinberg, 2004), or the ability to change behavior in accordance with environmental demands. Understanding the behavioral and brain changes that impact the adolescent’s ability to adapt to these changing environmental demands is key to understanding plasticity during this period of development.

Through our interactions with the world, we form predictions about what will occur based on the frequency and co-occurrence of experienced events. The rate of occurrence of events can bias our behavior such that we are faster to respond to frequent events and slower to respond to rare ones.
A stimulus becomes familiar or learned through repeated presentations, as in the classic habituation paradigm used with infants (Bornstein, 1998; Fantz, 1964) or the oddball paradigm used in electrophysiological studies (Knight, 1984). Our expectations become biased in favor of frequent or familiar events, as evidenced by quicker responses or shorter looking times toward them and slower responses to rare or novel ones. Learning to predict specific situations is essential for guiding one’s thoughts and actions. When predictions are violated, the organism needs to detect these occurrences and learn from them in order to adapt behavior to changing environmental demands (Kahneman and Tversky, 1979). Thus, learning depends not only on detecting co-occurrences of events, but also on processing the discrepancy between what we learn to expect and what actually happens (Rescorla and Wagner, 1972). In order to behave adaptively in a changing environment, our predictions must be updated based on experience. Neural circuitry comprised of dopaminergic neurons in the ventral tegmental area of the midbrain and their target areas in the limbic forebrain, particularly the ventral striatum and prefrontal cortex, has been implicated in biasing behavior toward predictable events (Haber, 2003; Nestler, 2004; Volkow et al., 2011). The ventral striatum is proposed to assign value to current rewards as part of the process of learning to predict future events. These value signals are broadcast to prefrontal control regions to help guide future behavior (Knutson et al., 2001; O’Doherty et al., 2002). During learning, dopamine activity shifts from rewards themselves to cues that predict rewards (Hollerman and Schultz, 1998; Mirenowicz and Schultz, 1994).
Rather than indicating the presence or absence of rewards or events per se, these dopaminergic neurons are argued to encode a prediction error, signaling the difference between the actual and predicted value of an event (Arias-Carrion and Poppel, 2007; Montague et al., 1996; Schultz et al., 1997). If an expected reward is not received, the firing of dopaminergic neurons decreases at the time the event should have occurred. When a reward is greater than expected, the firing of dopaminergic neurons increases (Schultz et al., 1997), increasing motivation toward the predictive cue via this prediction-error learning mechanism. Thus, in order to respond flexibly to a changing environment, the brain must integrate stimuli, behavioral responses, and feedback information that depend on particular cues and contexts. Importantly, this learning process appears to be subject to modulation by age (see Sections 3.2 and 3.3). Here, we focus specifically on developmental changes in learning that occur during adolescence. The capacity to change one’s actions in order to cope with new demands appears to develop progressively across the life span (Somerville et al., 2010). A challenge for developmental neuroscience is to understand the neural changes that bridge the gap between detecting changes in the environment and the actual adaptation of behavior. Predictive learning is a cornerstone of behavioral development. Knowing what events to expect, when, and in which contexts is critical for planning and maintaining contextually appropriate thoughts or actions over time. Adjusting behavior when these predictions are violated is an essential element of behavioral regulation.
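The prediction-error idea described above can be made concrete with the Rescorla-Wagner update rule (Rescorla and Wagner, 1972), in which a cue's predicted value is nudged toward each outcome by a fraction of the discrepancy between received and predicted reward. The sketch below is purely illustrative; the learning rate and reward schedule are arbitrary choices, not parameters drawn from any of the cited studies.

```python
# Minimal Rescorla-Wagner sketch: a cue's predicted value V is updated by
# a prediction error delta = reward - V, scaled by a learning rate alpha.
# Parameter values here are illustrative assumptions, not fitted estimates.

def rescorla_wagner(rewards, alpha=0.2, v0=0.0):
    """Return trial-by-trial predicted values and prediction errors."""
    v = v0
    values, errors = [], []
    for r in rewards:
        delta = r - v          # prediction error: actual minus expected
        v = v + alpha * delta  # move the expectation toward the outcome
        values.append(v)
        errors.append(delta)
    return values, errors

# A cue reliably followed by reward: prediction errors shrink as the
# expectation approaches the outcome, mirroring the shift of dopamine
# responses from the reward itself to the predictive cue.
values, errors = rescorla_wagner([1.0] * 10)

# Omitting the reward after learning yields a negative prediction error,
# analogous to the dip in dopaminergic firing when an expected reward
# fails to arrive at the predicted time.
values2, errors2 = rescorla_wagner([1.0] * 10 + [0.0])
```

In this toy run the first prediction error is maximal (the reward is entirely unexpected), successive errors shrink toward zero as the cue's value is learned, and the final omitted reward produces a negative error, qualitatively matching the firing patterns reported by Schultz and colleagues.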
There is a wealth of behavioral evidence from experimental paradigms in controlled laboratory settings showing a steady improvement from infancy to adulthood in the ability to suppress an inappropriate response in favor of an appropriate one (Case, 1972; Casey, 2005; Casey et al., 2005; Davidson et al., 2006; Flavell et al., 1966; Keating and Bobbitt, 1978; Pascual-Leone, 1970). Classic examples of these paradigms include the “A not B” task in infancy (Diamond, 1985; Piaget, 1954) and the Stroop (Tipper et al., 1989), card sorting (Munakata and Yerys, 2001; Zelazo et al., 1996), antisaccade (Luna et al., 2001), and go-nogo tasks (Casey et al., 1997; Luria, 1961) in childhood and adolescence. In all cases, children have a more difficult time overriding habitual or salient actions in favor of the currently appropriate action, with this ability gradually developing through early adolescence (Passler et al., 1985). Thus, behavioral development relies on the process of learning from our environment and adapting behaviors appropriately. Behavioral adaptation – the flexibility or ability to alter one’s actions in order to adapt to new circumstances – improves from infancy to adulthood. However, recent studies have begun to provide elegant demonstrations of how this ability is differentially biased in motivational contexts. These studies suggest a change in sensitivity to environmental cues, particularly those signaling potential reward, at different points in development, with a unique influence of motivational cues on behavior during adolescence (Casey and Jones, in press; Somerville and Casey, 2010) that may be consistent with experience-expectant demands. For example, using a gambling task in which reward feedback was provided immediately (“hot” trials) or after a delay (“cold” trials), Figner and colleagues (Figner et al., 2009) showed that adolescents made disproportionately more risky gambles than adults, but only in the “hot” condition.
Using a similar gambling task, Cauffman and colleagues (Cauffman et al., 2010) have shown that this sensitivity to rewards and incentives actually peaks during adolescence, as demonstrated by a steady increase from late childhood to adolescence in the tendency to play with more advantageous decks of cards, followed by a subsequent decline from late adolescence to adulthood. These findings illustrate a curvilinear function, peaking roughly between ages 13 and 17 and subsequently declining (Steinberg et al., 2009). Recently, Somerville et al. (2011) provided evidence for a specific reduction in behavioral regulation in adolescents, relative to children and adults, when faced with unpredictable cues signaling appetitive value. Using a go-nogo task that contained social appetitive cues (e.g., happy faces) that facilitated approach responses, they showed developmental differences in subjects’ ability to flexibly approach or avoid positive or neutral stimuli. Specifically, adolescents had more difficulty suppressing a response to rare appetitive cues relative to neutral ones, a pattern not observed in children or adults (see Figure 1a).

[Figure 1. (A) Gray line: proportion of correct hits out of total go trials; black line: proportion of false alarms out of total no-go trials. The y-axis represents the proportion of responses on happy trials adjusted for the proportion of responses on calm trials. (B) Brain regions showing differential activity as a function of age. Activations, thresholded at p < .05, small volume corrected, are rendered on a representative high-resolution anatomical scan. The left side of the image corresponds to the left side of the brain. (C) Plot of activity in the ventral striatum (circled in panel B) in response to happy faces (no-go and go conditions collapsed) relative to rest, as a function of age. Adolescents show a significantly larger magnitude of activation relative to both children and adults. Copyright: MIT Press Journals. Reprinted with permission (Somerville et al., 2011).]

In a recent study using a probabilistic learning task, Cohen et al. (2010) examined how the expectation of an event based on prior outcomes, and its reward value (e.g., low or high magnitude), impacts behavior. As expected, individuals of all ages were more accurate and quicker to respond to predictable stimuli. However, adolescents responded more quickly to stimuli that had previously been associated with a higher reward value. Thus, at different ages, predicted outcomes and incentives may drive behavior in different ways.

To better understand the behavioral changes observed across this period of development, it is essential to explore the biological changes behind them. The human brain undergoes significant changes in both its structural architecture and functional organization early in life, reflecting ongoing progressive and regressive processes (Casey et al., 2005). Even though total brain volume approximates that of the adult by 5 or 6 years of age, many of these developmental processes continue well into young adulthood. These developmental changes include proliferation and migration of cells, mostly during fetal development (Jacobson, 1991; Rabinowicz, 1986), followed by regional changes in synaptic density (Bourgeois et al., 1994; Huttenlocher, 1979; Huttenlocher, 1990) and the development of myelination (Yakovlev, 1967). These changes appear to be regional in nature, in that cortical sensory and subcortical areas undergo dynamic changes such as synaptic pruning earlier than higher-order cortical association areas (Huttenlocher, 1997; Rakic et al., 1986).
These patterns of regional brain development are consistent with recent human anatomical magnetic resonance imaging (MRI) studies showing protracted development of gray matter in higher-order prefrontal areas, continuing into young adulthood (Giedd et al., 1999), relative to subcortical regions that appear to be structurally mature by adolescence (Sowell et al., 2002). Significant regional neurochemical changes occur within this circuitry that further support nonlinear changes in brain development. Within the dopamine system (Buckley et al., 2009; Cardinal et al., 2001; Gill et al., 2010; Pasupathy and Miller, 2005), a peak in dopamine receptor density occurs in the striatum during early adolescence (Benes et al., 2000; Brenhouse et al., 2008; Galvan et al., 2005), whereas this peak does not emerge until later in the prefrontal cortex (Cunningham et al., 2008; Tseng and O’Donnell, 2007). This differential timeline of peak receptor density in the striatum and cortex may result in an imbalance between these systems. However, how changes in dopamine systems relate to motivated behavior is unclear, as controversy remains as to whether behavioral sensitivity to reward stems from less active or hypersensitive dopamine systems (Robinson and Berridge, 2003; Volkow and Swanson, 2003). This controversy is consistent with the mixed adolescent imaging findings of diminished or elevated ventral striatal activity in anticipation of appetitive outcomes (Bjork et al., 2010; Somerville et al., 2011). Nonetheless, neuroanatomical and neurochemical changes in dopamine-rich circuitry during adolescence coincide with differences in motivated behavior that are distinct from pre-adolescence and adulthood (Brenhouse et al., 2008; Spear, 2010). Evidence for regional neurochemical, structural, and functional brain changes with development has led to a theoretical account of adolescence referred to as the imbalance model (Casey et al., 2008; Galvan et al., 2006).
Similar “dual systems” models have been described (Dahl, 2004; Steinberg, 2005, 2008). According to our model, reward-related subcortical regions and prefrontal control regions interact differently as a circuit across development. Specifically, subcortical projections develop earlier than those supporting prefrontal control. This developmental imbalance results in relatively greater reliance on subcortical versus prefrontal regions during adolescence (i.e., an imbalance in the relative reliance on these systems), compared to adulthood, when this circuitry is fully mature, and compared to childhood, when this circuitry is still developing. With age and experience, the connectivity between these regions is strengthened and provides a mechanism for top-down modulation of subcortically driven reward behavior (Hare et al., 2008). These regional neuroanatomical changes across development have implications for behavior. Development of the prefrontal cortex is associated with age-related improvement in behavioral regulation (Asato et al., 2010; Astle and Scerif, 2009; Casey et al., 2007; Durston et al., 2006; Forbes and Dahl, 2010; Liston et al., 2006; Luna et al., 2010; Luna et al., 2001; Romeo, 2003), while subcortical regions within this circuitry are sensitive to novelty and reward (Bunge et al., 2002; Liston et al., 2006). Thus, sensitivity to motivational contexts during adolescence has been proposed to result from a tension between early-emerging “bottom-up” subcortical regions that express exaggerated reactivity to incentive-related cues and later-maturing “top-down” cognitive control regions involved in self-control. Yet few empirical studies have examined the interaction between these two functional systems across development. One of the first studies to examine incentive-related processes across the full spectrum of development from childhood to adulthood was completed by Galvan and colleagues (Galvan et al., 2006).
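The core logic of the imbalance model, with an earlier-maturing subcortical reward system and a slower-maturing prefrontal control system, can be caricatured as two maturation curves whose difference is largest in between. The sketch below is only an illustration of that logic; the sigmoid form, midpoint ages, and slope are arbitrary assumptions, not quantities estimated in the cited work.

```python
import math

def sigmoid(age, midpoint, slope=0.5):
    """Toy logistic maturation curve rising from 0 to 1 with age."""
    return 1.0 / (1.0 + math.exp(-slope * (age - midpoint)))

def imbalance(age, subcortical_mid=12.0, prefrontal_mid=20.0):
    """Illustrative 'imbalance' score: subcortical drive minus prefrontal
    control. Midpoint ages are hypothetical, not empirical estimates."""
    return sigmoid(age, subcortical_mid) - sigmoid(age, prefrontal_mid)

# The difference between the two curves is small in childhood (both systems
# immature) and in adulthood (both mature), and peaks in between, yielding
# the adolescent-specific pattern the model predicts.
scores = {age: imbalance(age) for age in range(6, 31)}
peak_age = max(scores, key=scores.get)  # falls in the adolescent range
```

With these particular (hypothetical) midpoints, the score peaks midway between the two maturation curves, reproducing the curvilinear, adolescent-peaking profile described for reward sensitivity above without implying anything about the real maturational timetables.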
In an experiment modeled on previously described nonhuman primate studies of error-driven reinforcement learning (Cromwell and Schultz, 2003; Fiorillo et al., 2003), Galvan and colleagues manipulated the magnitude of reward outcome and examined the effects across development in dopamine-rich subcortical and prefrontal regions. They found that the ventral striatum was sensitive to varying magnitudes of monetary reward (Galvan et al., 2005) and that this response was exaggerated in adolescents relative to both children and adults (Galvan et al., 2006). A similar increased sensitivity to rewarding stimuli among adolescents is found in paradigms using positive social cues such as happy faces (Somerville et al., 2011; see Figure 1b and c). Differential activity in the ventral striatum to monetary rewards in adolescents, relative to children and adults, has been observed across a number of labs and experimental tasks (Bjork et al., 2008; Bjork et al., 2010; Ernst et al., 2005; Galvan et al., 2006; Geier et al., 2010; Van Leijenhorst et al., 2010). In contrast to this adolescent-specific pattern in the ventral striatum, prefrontal regions showed protracted, more linear changes in brain activation from early childhood to young adulthood (Durston et al., 2006; Galvan et al., 2006; Somerville et al., 2011). Recently, Somerville and colleagues (2011) examined the interaction of cognitive and motivational neural systems from childhood to adulthood. Using the go-nogo task described previously, which contained rare social appetitive cues (e.g., happy faces), they showed a reduction in adolescents’ capacity to suppress an approach response toward positive, appetitive social cues. This decrement in performance during adolescence was paralleled by enhanced activity in the ventral striatum (see Figure 1b and 1c).
Conversely, activation in the inferior frontal gyrus was associated with overall accuracy and showed a linear pattern of change with age for no-go versus go trials (see Figure 2). Using a functional connectivity analysis, they showed that this frontostriatal circuitry changes with age, with stronger subcortical connections during adolescence and enhanced prefrontal connections in adulthood. It is this maturation of the relevant circuitry that is thought to explain developmental differences in behavior when suppressing a response to an unexpected incentive-related cue.

[Figure 2. (A) Brain regions showing more pronounced activity during “Nogo” relative to “Go” trials. Activations are thresholded at p < .05, whole-brain corrected, and rendered on a representative high-resolution anatomical scan. (B) Plot of activity in the right IFG (circled in A) for Nogo relative to Go trials as a function of age. The IFG is involved in maintaining task demands to adjust behavior and override a response when the predicted target fails to occur. Increasing age predicts a linear decrease in IFG recruitment. (C) Plot of activity in panel A as a function of performance. Greater recruitment on successful suppression (correct Nogo) trials relative to Go trials tended to predict worse performance (greater false alarm rate). Copyright: MIT Press Journals. Reprinted with permission (Somerville et al., 2011).]

These findings are in part consistent with the earlier reported findings of Cohen and colleagues (Cohen et al., 2010) that adolescents responded more quickly to stimuli previously associated with a higher reward value. In that study, adolescents also showed greater ventral striatal activity to higher, unpredicted rewards compared to adults and children.
Subsequent learning studies (van den Bos et al., 2012) suggest that these age differences may not be related to differences in learning signals per se, but rather to how these signals guide behavior and expectations in the immature brain (van den Bos et al., 2012; van den Bos et al., 2009). According to this interpretation, it is the functional connectivity between reward-related regions (ventral striatum) and the prefrontal cortex that changes as a function of age, not learning itself. It is important to consider these results in the context of the environmental changes that influence the frequency and manner in which these systems are engaged. Adolescence is a period when the individual spends less time with parents and more time with peers. With this increased freedom come new social and environmental demands. Specifically, during adolescence we must learn to regulate our own behavior rather than relying on parents or guardians to constrain it, as when we were children. When placed in contexts in which we must regulate our own behavior, control failures may be a product of a prefrontal regulatory system that is relatively inexperienced and thus not functionally mature. Over time, experience shapes the capacity to regulate these approach behaviors, resulting in a greater balance between approach and regulatory signaling circuitries and a strengthened ability to resist temptation. Speculation as to why the brain would be programmed in such a way as to make adolescents more impulsive in motivational contexts leads us back to the previously described experience-expectant demands on the organism.
Several species, from rodents to primates to birds (Spear, 2000), experience a developmental period of adolescence characterized by increased novelty seeking, seeking out same-age peers, and fighting with parents, all of which help the adolescent transition from the home territory and dependence on the parent to relative independence. The fact that traits such as novelty seeking and experimentation were not removed by natural selection suggests that they may serve an important role during the transition from childhood to adult life. This is a time when individuals need to be adaptable and gradually switch from a sheltered life with parental guidance and resources to an independent one. Seeking alternative food sources so as not to deplete those at home, and seeking future mates, requires some degree of pull or tension away from the safety of the home. A system that is more sensitive, more easily pulled by environmental cues signaling incentives, may help meet this demand in an adaptive way. The notion of adolescence as an adaptive period of development has received considerable public attention (Dobbs, 2011), moving the field away from psychopathologizing adolescence toward considering ways in which heightened sensitivity to environmental cues may be useful. For example, although we have highlighted how incentives can diminish behavioral regulation, they can also enhance performance. Rewarding individuals for appropriate behavior can make them work harder and perform better than when not rewarded. Our review of adolescent development suggests a change in sensitivity to reward-based cues, such that incentives may have a unique influence on behavior during the adolescent years. Recent work by Ernst and colleagues (Hardin et al., 2009; Jazbec et al., 2006) showed improved performance on an impulse control task (the antisaccade task) when participants were promised a financial reward for accurate performance on some trials but not others.
The reward facilitated adolescent performance on the task more than it did adult performance. Recent parallel imaging studies (Geier et al., 2010; Teslovich and Casey, in preparation) suggest that these enhanced behavioral effects of incentives are associated with increased reward-related modulation of control regions by the ventral striatum. Together, these findings suggest that immediate incentives may be used to positively alter behavior. When considering how adolescence may be adaptive, it is important to highlight misconceptions about this developmental period (Casey and Caudle, in press; Reyna and Farley, 2006; Steinberg, 2008). As Steinberg notes, empirical evidence does not support the following common beliefs: “(a) that adolescents are irrational or deficient in their information processing, or that they reason about risk in fundamentally different ways than adults; (b) that adolescents do not perceive risks where adults do, or are more likely to believe that they are invulnerable; and (c) that adolescents are less risk-averse than adults” (Steinberg, 2008, p. 80). In fact, there is evidence to the contrary. Tymula and colleagues (Tymula et al., 2012) show that adolescents are, if anything, more averse to clear risks than adults. Specifically, they showed that adults were more willing than adolescents to gamble when the risk was known to be high. Adolescents, in contrast, were more willing to gamble when the risks were unclear. What appears to distinguish adolescent from adult risky decisions is adolescents’ tolerance for ambiguous risks. While adolescents tolerate unknown outcomes in making risky decisions, adults tolerate this uncertainty less. This tolerance for the unknown may be biologically adaptive, allowing individuals to take advantage of novel experiences and new learning opportunities (Tymula et al., 2012).
Adolescence is clearly a period during which the individual must rapidly adapt to changing environmental demands, given the social, sexual, and intellectual challenges of this developmental period that prepare the individual for independence. Yet even as adults we vary in the ability to control our impulses. Perhaps one of the best examples of individual differences in these abilities reported in the social, cognitive, and developmental psychology literatures is the ability to delay gratification (Mischel et al., 1989). Delay of gratification is typically assessed in 4-year-olds. The child is asked whether she or he would prefer a small reward (one marshmallow) now or a larger reward (two marshmallows) later, and may ring a bell at any point to summon the experimenter and receive the smaller reward immediately. Children typically behave in one of two ways: they either ring the bell almost immediately in order to have the one marshmallow now (low delayers), which means they get only one, or they wait several minutes in order to receive the larger reward of two marshmallows (high delayers). This observation suggests that some individuals are better than others at controlling impulses in the face of highly salient incentives and that this ability can be detected in early childhood (Mischel et al., 1989). The ability to wait for reward has been demonstrated to be adaptive and may buffer individuals against the development of a variety of dispositional physical and mental health vulnerabilities, including higher body mass index and illicit substance use (Kubzansky et al., 2009; Mischel and Ayduk, 2004; Mischel et al., 1988; Rodriguez et al., 1989). The relative lifetime stability of the capacity to resist temptation or control impulses has recently been shown in a 40-year longitudinal study. The participants were individuals who were tested at age 4 in the original delay-of-gratification task and whose self-control abilities remained consistent in follow-up assessments, falling roughly in the top and bottom quartiles of delay ability.
Two experiments were conducted to investigate the ability of these individuals, now in their 40s, to refrain from responding to appetitive cues. Both tasks examined impulse control: one in the presence of neutral (“cool”) cues and one containing compelling (“hot”) cues. Because marshmallows are unlikely to be as rewarding to individuals in their 40s as in childhood, appetitive social cues (happy faces, relative to neutral and fearful faces) were used as non-targets in a go-nogo task. The results showed that individuals who were less able to delay gratification as children showed, as adults in their 40s, less impulse control when suppressing a response to “hot” but not to “cool” cues. Specifically, individuals who, as a group, had more difficulty delaying gratification at 4 years of age continued to show reduced self-control abilities 40 years later, exhibiting more difficulty as adults in suppressing responses to positive social cues during a go-nogo impulse control task (Casey et al., 2011) (Figure 3). These findings suggest fairly robust, stable differences in self-control ability even 40 years later.

[Figure 3. High and low delayers do not differ in performance on a go/nogo task when cues are “cool” stimuli (neutral facial expressions), but low delayers make more errors when the cues are “hot” (emotional faces). The experiment was conducted outside the MR scanner. Error bars denote SEM. Copyright: PNAS. Reprinted with permission (Casey et al., 2011).]

These findings raise the question of what the neural basis of this ability may be. To address this question, approximately half of the 59 high- and low-delaying individuals were imaged during performance of the “hot” go-nogo task (Casey et al., 2011).
The findings showed that performance on this task was supported by dopamine-rich ventral frontostriatal circuitry. Specifically, ventral prefrontal activity was involved in accurately withholding a response; low delayers showed diminished recruitment of this region for correct “nogo” relative to “go” trials as compared to high delayers (Figure 4). However, it was the ventral striatum, which showed a significant difference in recruitment between the high- and low-delay groups, that paralleled the behavioral finding of poorer performance when suppressing a response to an appetitive social cue (Figure 5). Thus, the data suggest that stronger recruitment of cognitive control areas (ventral prefrontal cortex) and stronger inhibition of reward-related activity (ventral striatum) may underlie the relative ease with which high delayers, compared to low delayers, can suppress responses to appetitive stimuli. Moreover, this finding “highlights the importance of the qualities of the stimulus people have to resist, such as its salience or allure, in modulating cognitive control ability” (Casey et al., 2011). These findings suggest that sensitivity to rewarding cues can influence an individual’s ability to adapt to changing inputs and suppress inappropriate actions. This adaptive control region of the prefrontal cortex may be “hijacked” by reward learning systems, rendering control systems unable to flexibly modulate behavior. Learning systems such as the striatum provide information about structure in the environment and detect violations of learned predictions; this information is fed to the prefrontal cortex for integration with goals represented in working memory. The result is facilitation of top-down control of behavior as well as alteration of behavior in response to relevant changes in context (Amso et al., 2005; Casey, 2002; Munakata et al., 2011). 
Figure 4. Differential prefrontal recruitment for Nogo versus Go trials is more pronounced in high than in low delayers. The right inferior frontal cortex was associated with correct inhibition of a response (Nogo) relative to making a correct response (Go). Left: Activation map of right inferior frontal gyrus (IFG), thresholded at P < 0.05, whole-brain corrected, displayed on a representative high-resolution T1-weighted axial image. Right: Percentage change in MR signal for Go and Nogo trials in the IFG. High delayers show greater polarization of the IFG response to Nogo relative to Go trials (p = 0.001). Error bars denote SEM. Copyright: PNAS. Reprinted with permission (Casey et al., 2011).

Figure 5. Left: Activation map for the three-way interaction of task, emotion, and delay group, depicting ventral striatum activity thresholded at P < 0.05, small volume corrected, displayed on a representative high-resolution T1-weighted axial image. Right: Ventral striatal response to happy Nogo trials in high and low delayers. Copyright: PNAS. Reprinted with permission (Casey et al., 2011).

These brain regions are able to alter behavior via afferent and efferent connections with the prefrontal cortex, which are fine-tuned with experience over the course of childhood development. This experience-dependent fine-tuning proceeds more slowly in the less mature system, as evidenced by the similar imbalance during adolescence described earlier (Chein et al., 2011; Galvan et al., 2006; Somerville et al., 2011). Both examples show how cognitive control may be particularly vulnerable in the context of rare reward-related cues, which can alter the behavioral flexibility of the individual. 
A common question from the media and lay public when presented with the results of the delay-of-gratification studies is whether the findings suggest that an individual who cannot wait as a child to gain two treats is sentenced to a life of poor self-control and poor outcomes, or whether this developmental trajectory can be altered (i.e., is it plastic). Moreover, is it the case that low delayers cannot adjust their behavior to suppress responses to appetitive cues, or that they have less flexibility (plasticity) in this ability but ultimately, with time, can suppress these responses? The latter question may be addressed by examining the performance of both delay groups over time to see whether the low delayers ever perform similarly to the high delayers. If we examine performance during early, middle, and late trials for the participants in the 40-year follow-up study, we see that although low delayers initially perform more poorly on the impulse control task, by the middle of the experiment they appear to be performing similarly to the high delayers (see Figure 6). This flexibility in adjusting behavior to rare or emotional cues is supported by prefrontal circuitry that enables plasticity in behavior. While the sample size precludes the power to test a group-by-time interaction, it does appear that low delayers are capable of adjusting their behavior in the face of changing demands, but are less flexible, requiring additional practice or experience to do so.

Figure 6. High delayers adapt earlier to the demands of the Go/Nogo task than low delayers. High delayers maintain good performance throughout the task (i.e., a low false alarm rate on Nogo trials), whereas low delayers perform significantly worse in early trials (p = 0.02) and progressively improve in later trials. False alarm rates do not differ significantly for middle (p = 0.21) and late (p = 0.35) trials. 
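The early/middle/late comparison described above amounts to binning the trial sequence and computing a false-alarm rate (incorrect responses on Nogo trials) within each bin. As a minimal sketch of this kind of analysis, the following Python snippet computes binned false-alarm rates from simulated trial-level data; all data, probabilities, and function names here are hypothetical illustrations, not values or code from Casey et al. (2011):

```python
import numpy as np

rng = np.random.default_rng(0)

def false_alarm_rate(responses, is_nogo):
    """Proportion of Nogo trials on which a response was (incorrectly) made."""
    responses = np.asarray(responses, dtype=bool)
    is_nogo = np.asarray(is_nogo, dtype=bool)
    return responses[is_nogo].mean()

def binned_false_alarms(responses, is_nogo, n_bins=3):
    """Split the trial sequence into equal bins (e.g., early/middle/late)
    and compute the false-alarm rate within each bin."""
    responses = np.asarray(responses, dtype=bool)
    is_nogo = np.asarray(is_nogo, dtype=bool)
    bins = np.array_split(np.arange(len(responses)), n_bins)
    return [false_alarm_rate(responses[b], is_nogo[b]) for b in bins]

# Simulated example (hypothetical parameters): 120 trials, one third Nogo.
# The "low delayer" starts with a high false-alarm probability that decays
# with practice; the "high delayer" stays at a stable low level throughout.
n_trials = 120
is_nogo = rng.random(n_trials) < (1 / 3)
p_fa_low = np.linspace(0.5, 0.15, n_trials)   # improves across the session
p_fa_high = np.full(n_trials, 0.15)           # stable good performance
resp_low = (rng.random(n_trials) < p_fa_low) & is_nogo
resp_high = (rng.random(n_trials) < p_fa_high) & is_nogo

print("low delayer  (early/middle/late):", binned_false_alarms(resp_low, is_nogo))
print("high delayer (early/middle/late):", binned_false_alarms(resp_high, is_nogo))
```

With simulated data of this shape, the low delayer's early-bin rate exceeds its late-bin rate while the high delayer's rates stay roughly flat, mirroring the convergence pattern the figure describes; in the actual study these per-bin rates would be compared across groups with significance tests.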
In terms of the first question, it should be noted that although there is remarkable stability in self-control abilities for many individuals, just under half of the original cohort of children who performed the delay-of-gratification task 40 years ago show different levels of self-control now as middle-aged adults. Specifically, some children who were below average in self-control in preschool developed into adults with high self-control abilities, and some children who were above average in self-control in preschool have low self-control abilities as adults. Thus, plasticity is evidenced by the change in these individual developmental trajectories, underscoring the importance of both biological constraints and experiential history (e.g., gene × environment interactions) in shaping our behavior and our flexibility in adjusting behavior in the face of changing environmental demands.

Plasticity is the capacity to adapt to changing environmental demands. This paper provides evidence that this ability varies by age, individual, and context. Evolution has shaped the brain to deal appropriately with the expected demands of the world: to provide the cognitive flexibility to learn from experiences, to monitor the world based on learned predictions, and to adjust actions when these predictions are violated. These demands change with developmental transitions from infancy to late adulthood, requiring new adjustments in behavior at each developmental phase. By definition, adolescence presents new demands as the individual moves from dependence on parents to relative independence. This transition appears to be facilitated by a biologically driven imbalance between increased novelty- and sensation-seeking and immature “self-regulatory competence” (Dahl, 2004; Steinberg, 2004, 2008). 
This imbalance is not seen only during adolescence but can vary by individual across development, as in the sensitivity to rewarding cues that impairs an individual’s ability to suppress inappropriate actions toward them. The neural systems underlying this imbalance include dopamine-rich prefrontal and subcortical regions. The prefrontal cortex is involved in flexibly adjusting behavior in response to changing environmental demands, such as distracting novel or rewarding events. This region appears to be “hijacked” by reward regions, rendering control systems unable to flexibly modulate behavior. With development and experience this circuitry is fine-tuned, providing top-down modulation of subcortical regions by prefrontal ones. The studies presented provide support for the role of not only biological constraints but also experiential history in shaping our behavior and our flexibility in adjusting behavior in the face of changing environmental demands.
This work was supported in part by National Institutes of Health Grants R01 DA018879, R01 HD069178, and by National Science Foundation Grant 06-509 (BJC).