
‘Any attempt to provide an adequate theory of cognition that ignores emotion is probably doomed to failure’ (Eysenck, 1995). Critically discuss


Abstract

The relation between emotion and cognition has been investigated intensively in recent years in the field of neuroscience and has sparked considerable controversy in the scientific community. A wide range of theories has been proposed since the beginning of the last century, starting with the writings of William James, before psychology had even established a strong experimental tradition, and research today is expanding through the introduction of, and cooperation with, other sciences such as genetics, neurobiology, molecular biology and computer science. As time goes by, neuroscientists increasingly converge on the conclusion that cognition and emotion are so interlinked that any attempt to separate them is simply doomed to failure. I am going to discuss the major theories and models that have been formulated over the years, proceeding to more advanced theoretical models, and including in each case the basics of their structural and anatomical underpinnings. In addition, I will elaborate to some extent on the literature on emotion and psychopathology, since psychopathology and neurological disorders are among the best tools for extracting information about brain processes, and I will briefly touch on research on implicit and explicit emotion regulation.

In essence, we can argue that we function on two levels: an automatic, implicit one that requires no cognitive resources, and a deliberative, effortful one that requires monitoring, evaluation, appraisal, attention and other cognitive resources (Gyurak, Gross & Etkin, 2011). Some authors refer to these processes as unconscious, but the term ‘unconscious’ has created heated controversy in the scientific community because of the theoretical implications it carries compared with the term ‘implicit’. Thus, it is not reliable to use ‘unconscious’ as a substitute for ‘implicit’. It is also important to distinguish between affect and emotion. Affect is a primary response that can be assessed more easily. It can be defined as a set of wired, probably evolution-based dispositions that are shaped by an immediate cause-and-effect stimulus, spatiotemporal information and representations held in memory (Lewis, Haviland-Jones & Barrett, 2008). Affect is more rapid and automatic than emotion and is the first sensation we get: for example, when you see a picture of starving African children, you rapidly get a sudden feeling of dread, yet this is not enough to give rise to a full-blown, more concrete emotion. Interestingly, this is what often biases our decisions, when we react to this initial hunch before we think about it rationally. Presumably, affect comprises the basic biological functions that may or may not evolve into full-blown, concrete emotional states (Dalgleish & Power, 1999; Lewis, Haviland-Jones & Barrett, 2008).

James formed his theory on the basis that emotions are triggered after the perception of an event (Barbalet, 1999). Thus, someone experiences the emotion of fear because she tried to run away, and not the other way around. Hence, a raised heartbeat and a dry mouth give rise to the emotion of fear. However, James's theory was posed slightly differently from Lange's, since Lange classified all those bodily reactions themselves as ‘emotions’ (Dalgleish & Power, 1999; Barbalet, 1999). Nevertheless, so far very little evidence supports their theory, and what evidence there is stems mostly from research on panic disorder, PTSD and other psychopathologies, where bodily sensations trigger conditioned emotional responses and their associated representations from episodic memory (Ochsner & Barrett, 2001; Dalgleish & Power, 1998). Beyond that, the theory received much criticism from Cannon and Bard, who argued that physiological responses are triggered together with emotions and, further, that some autonomic responses are simply too slow to be what makes an emotion kick in (Dalgleish & Power, 1999). Further opposing evidence is the fact that some physiological changes occur almost regardless of which emotion is experienced, and that the very same bodily changes can accompany emotions that are very different in nature (Dalgleish & Power, 1998). Cannon and Bard's theory gains some credibility from research on the thalamus. With respect to this theory, thalamic damage or a lesion could lead to uncontrollable reactions to stimuli that consequently affect further higher-order processing. The prefrontal areas then cannot inhibit thalamic reactions, and the thalamus abnormally outputs signals triggering the very basic physiological responses of an emotion to the cortex and other anatomically higher regions responsible for the conscious processing and experience of emotion.
So, there is not a cause-and-effect relationship between emotions and bodily sensations, but rather a simultaneous manifestation. The thalamus sends inputs to the frontal areas and the brain stem, and this is where emotion is experienced consciously together with its associated physiological responses. However, later neuroscientists downgraded the importance of the thalamus in Cannon's theory, and other brain areas were marked as equally important: the areas that form the limbic system, a network of different regions interacting to process emotions (Lewis, Haviland-Jones & Barrett, 2008).

A theory proposed at the beginning of the last century, which coexisted to an extent with Darwin's ideas, was the facial feedback hypothesis, initially formulated by William James. He focused on the bodily changes caused by an emotional reaction: if you suppress that sensation, the accompanying emotion disappears. According to James, without a bodily response there is no emotion felt. This idea was extended to include specifically the facial muscles, where, depending on the feedback, emotions either intensify or weaken (Alam, Barrett, Hodapp & Arndt, 2008). Thus, in simple words, we must smile to feel happy and perform a sad grimace to experience negative emotions. This hypothesis has been tested using what we know today as Botox, with which facial muscles can be temporarily paralyzed. Facial feedback involves both motor and sensory mechanisms: with Botox paralysis, feedback from the muscles retains its motor component but lacks the sensory one, thereby weakening the emotional response (Havas, Glenberg, Gutowski, Lucarelli & Davidson, 2010). Accordingly, an fMRI study showed that when Botox-treated participants were asked to perform facial expressions, amygdalar activity was reduced.

Similarly, in another study, when subjects were asked to read emotionally charged sentences, they did so more slowly after their Botox injection than before, indicating that facial muscles clearly play a role in the modulation of emotional responses (Havas, Glenberg, Gutowski, Lucarelli & Davidson, 2010). However, research on the facial feedback hypothesis is somewhat problematic, and a variety of methodologies have been used to extract more reliable conclusions. For example, many subjects might expect certain emotions to be appropriate given their facial expressions, or they might use cognitive strategies to alter their emotional responses, perhaps to help the experimenter. In another experiment, subjects were asked either to suppress or to exaggerate their facial expressions while watching videotapes; in the first condition their faces would not reveal their emotions to the experimenters, whereas in the latter condition experimenters could easily identify their emotions from their exaggerated facial responses (Strack, Martin & Stepper, 1988).

The classic way of examining the facial feedback hypothesis was simply to have participants simulate a facial expression and then measure the corresponding emotional states with electrodes attached to several parts of the face. However, this technique had its limitations, since it was not clear whether participants could identify what exactly they were feeling (Strack, Martin & Stepper, 1988). Moreover, even if the facial feedback hypothesis is correct, in either its strong or its weak version as suggested by different studies, the possibility of cognitive mediation is not ruled out: the hypothesis simply says nothing about the cognitive processing of these experiences or about whether any cognitive mediation is involved (Rutledge & Hupka, 1985; Matsumoto, 1987; Strack, Martin & Stepper, 1988). A final study tried to eliminate any possible confounds and deviated significantly from the conventional methods, in that participants were not induced to display the appropriate facial expressions in order for their emotions to be measured. In this case, facial expressions should emerge in their natural manner. Participants were asked to hold a pen with their lips, which constrained them from making a smiling expression, or, in another condition, to hold the pen with their teeth, thus forcing them into a smiling expression. In this way, participants could not shift their attention to their facial expression, as happened with other experimental methods, or estimate the emotions they should be feeling based on those expressions. Another mediating variable that could confound the results is emotional priming: relevant categories could be activated by a particular facial expression, with episodic information activating the associated representations (Matsumoto, 1987; Strack, Martin & Stepper, 1988).
Therefore, in this study the subjects' attention was focused solely on the pen-holding task and so could not activate any emotional categories and representations that might have been activated had their attention remained free to access the emotionally relevant categories corresponding to a particular facial expression. In conclusion, this study replicated the former ones, demonstrating that the facial feedback hypothesis is correct to some degree, regardless of whether subjects could recognize the emotional meaning of their facial expressions.

In the study of fear, LeDoux viewed emotions in terms of their physiological responses (autonomic responses such as blood pressure, heart rate, skin conductance, etc.) as well as their other biological underpinnings (e.g., the limbic system), and specifically the role of the amygdala in fear conditioning (Dalgleish & Power, 1998). He observed that fear responses could be conditioned in the form of memory. He mostly emphasized the neural circuits of fear and the primitive fight-or-flight reactions that are probably the outcome of our evolutionary heritage (Dalgleish & Power, 1999). He proposed a dual route of emotional processing: one that runs directly from the thalamus to the amygdala, producing automatic, reflex-like motor responses, and another that involves higher-order processing and appraisal (e.g., ‘I see a lion, therefore I should be afraid’), sending feedback back to the amygdala for a fearful response. The second route involves more substantial processing, recruiting prefrontal areas of the brain and the cingulate cortex, which work as short-term memory ‘buffers’, as well as representations from declarative long-term memory in the hippocampus and temporal lobe areas. Interactions between ‘working memory’ prefrontal areas, the amygdala and hippocampal regions allow more contextual analysis of stimuli and can thus mediate and alter an emotional response (e.g., ‘I don't have to panic; it is not so threatening after all’) (Dalgleish & Power, 1998).

Antonio Damasio added further cognitive implications to emotion with his somatic marker hypothesis. According to this hypothesis, inputs from the body bias the emotional circuitry (more specifically, the ventromedial prefrontal cortex) and eventually bias cognitive resources and cognitive processing. Several limbic areas interact with the somatosensory cortices, prefrontal areas and anatomically lower areas such as the basal ganglia and insula, thus extending the network of brain regions required to understand emotions (Bechara, Damasio, Tranel & Damasio, 2005; Damasio & Bishop, 1996). Damasio's approach therefore helped to identify the intermediate processes between the very basic physiological responses and the higher cognitive representations of the world, showing that these intermediate inputs and outputs between basic physiological functions and the emotional circuitry mediate the final cognitive representations. This is indicated in patients with a lesion in the ventromedial prefrontal cortex, who are impaired in making complex decisions because apparently they cannot process bodily signals. Furthermore, using Damasio's famous Iowa Gambling Task (IGT), patients with lesions in this brain area failed to develop any anticipatory responses and could not evaluate and choose a good card deck over a bad one, indicating an altered sensitivity to future rewards and punishments and impaired probabilistic reasoning (Damasio & Bishop, 1996). It is important to note that the IGT was designed to measure the somatic markers (feedback from the body) that would further produce an emotion, so in this task participants were basically learning via their emotions while performing it. They had to rely on a learned ‘hunch’ or gut feeling and decide how to respond, based on the task's demands (Damasio & Bishop, 1996; Bechara, Damasio, Tranel & Damasio, 2005).

However, we do tend to control our emotions, observe our behaviour and our physiology, and inhibit our emotions according to our long-term goals and intentions. Our moods come into play as well, along with many other factors that were ignored in the first approaches to emotion. Further support for cognitive theories of emotion came with the theory of Lazarus, who argued that cognitive appraisal is required to elicit an emotional response. Thus, there is an interpretation of an event that leads to certain physiological changes, which in turn evoke an emotion. Lazarus included in his theory variables such as subjective goals, interests and intentions that determine our reaction to an event and the perceived emotion. Hence, there is an evaluation of stimuli, a quick assessment of a situation based on our immediate goals and actions, and this in turn gives rise to an emotion.

The theory of Lazarus in actuality came as a response to, and as a way to consolidate, the pre-existing theories, which were inadequate and focused only on specific aspects while ignoring others (e.g., they relied heavily on autonomic responses and biological processing, or on facial feedback), and because previous research had shown cases of different responses to the same stimuli or the same responses to different stimuli (Griffiths, 2002). Lazarus also tried to describe the core relational themes that give rise to an emotion (e.g., someone feels anger at ‘a demeaning offense against me and mine’ or sadness at having ‘experienced an irrevocable loss’). However, these core themes that usually elicit an emotion do not adequately explain all the semantic structures of our emotions. Furthermore, it was observed that these core themes can operate implicitly too, before the perceptual process has ended and all the information has been evaluated. This implicit appraisal is demonstrated, for example, in cases where the emotions experienced contradict the subject's explicit evaluation of a situation, as when people consciously know there is nothing to be afraid of in the current conditions and yet the emotion of fear is still there, or in our tendency to react more strongly to negative stimuli than to positive ones (Griffiths, 2002; Dalgleish & Power, 1999). The notion of ‘implicit appraisal’ and Zajonc's ‘affective primacy’ theory came as a criticism of Lazarus's version of appraisal, in which appraisals had a more cognitive component. Zajonc's major argument was basically that low-level appraisal, that automatic hunch, the simple reaction to a stimulus known as affect, does not involve any cognitive processing (Griffiths, 2002).

The cognitive component is also emphasized in the Schachter-Singer theory, in which there has to be a reasoning process and some cognitive appraisal that ultimately leads to an emotional reaction. Thus, an interpretation of the incoming physiological sensations and environmental cues is necessary to trigger the appropriate emotional response at a given time (Reisenzein, 1983). In addition, they inferred that the very same physiological responses occur in different emotions, strengthening their position that cognitive mediation is necessary to label the perceived emotion (Lewis, Haviland-Jones & Barrett, 2008). Yet the theory had its drawbacks and limitations as well, since it ignores that emotions can be triggered in a more automatic way without any cognitive effort. It additionally ignores the fundamental role of the amygdala and other brain systems that interact to produce emotions, limiting the concept of emotion to autonomic responses, and it neglects the evidence that distinct physiological processes give rise to different emotional states (Reisenzein, 1983). This last argument, however, cannot be applied to certain psychopathologies (e.g., depression or generalized anxiety) where the individual responds in the same way to most environmental stimuli. In this model, physiological responses combined with cognitive mediation elicit an emotion. For example: ‘I am feeling nervous and tense’; at the same time, ‘Liza is looking at me’; therefore ‘I must be in love’. It is thus the interpretation of the environment that mostly determines our emotions (Lewis, Haviland-Jones & Barrett, 2008).

Nonetheless, even where we find the term ‘appraisal’, it remains too vague: it is not clear what is meant by ‘appraisal’ or what exactly is appraised, nor is the extent of the cognitive component in appraisal, or the extent of its implicitness, specified. The problem is that appraisal in some models could be considered a rapid assessment of an event, a first thought that may be more automatic than ‘cognitive’ in nature because it has become automatized through practice; such models say very little about re-thinking an event and reasoning over it extensively (re-appraisal), and they also ignore important contextual and personality variables. To resolve this confusion and settle the debate, multi-component models have been suggested, and these seem more effective for understanding emotions than the older models. They were formed on the rationale that in order to understand cognition versus emotion we have to break everything down into components and study them individually: study the content and structure of appraisals along with many other factors, instead of giving vague holistic labels like ‘cognition’ and ‘emotion’ to different processes (Dalgleish & Power, 1999).

A detailed multi-component model, for example, is Scherer's component process model of appraisal. It is based on an overall process of sequential synchronisation of five different components: cognitive appraisal, physiological activation, motor expression, motivational tendencies, and subjective feeling state. Emotions are thus elicited through the multilevel sequential evaluation checking of an emotional stimulus. The model posits four types of appraisal (Grandjean, Sander & Scherer, 2008). These appraisals concern the familiarity or novelty of the stimulus (relevance); the significance of the stimulus and how it affects the goals and motives of the individual (implications); the cognitive resources required to cope with the event (coping potential); and the social context of norms and values (normative significance). These appraisals are what register in and constitute working memory, and they are interdependent with the remaining subsystems: the neuroendocrine, somatic and nervous systems. Thus a change in one subsystem will affect all the others (Grandjean, Sander & Scherer, 2008).

According to Scherer's model, different emotional contents should be differentiated by distinctive appraisal patterns determined by the combination of certain cognitive, appraisal and motivational components. That explains why a given situational elicitor may evoke different emotions at different times and places, and why a given emotional content can be elicited by various emotion antecedents (Roseman & Spindel, 1990). Emotions are not activated by events per se but by the evaluation of those events and the combination of different cognitive and motivational components. Similar emotional events may elicit different emotions if they are processed and evaluated in different ways, and dissimilar events may elicit the same emotion if they are processed by a similar appraisal pattern. This approach can thus explain a variety of contextual behaviours that were previously ignored. Scherer also argues that appraisals can occur at the sensory-motor level, as genetic wiring, or what we call ‘biological preparedness’; then at the social level, with automatic learning that is largely implicit; and then at higher-order levels of processing involving consciousness, which include cultural meaning (Grandjean, Sander & Scherer, 2008). Scherer's basic idea was thus that there is a synchronization of multiple subcomponents, processing different kinds of information, that eventually elicits an emotion. Indeed, from a neurobiological standpoint, there is ample evidence for the importance of the ventromedial prefrontal cortex, the orbitofrontal cortex and the amygdala in emotional processing, and for their interaction in forming representations. In addition, it is relatively clear that the OFC can suppress negative amygdalar reactions, since the two structures form direct connections. The OFC appears to shape the actions and behavioural patterns that are elicited through the amygdala's emotional response.

There thus seems to be a synchronization of these neural circuits to exchange information and communicate. Similarly, the thalamus and the hypothalamus seem to synchronize with the amygdala in cases of facial threat, and a more distant synchronization of the thalamus and the amygdala occurs in cases of fear-related memories (Grandjean, Sander & Scherer, 2008). In the same manner, various local and/or distant networks are synchronized in response to emotional stimuli that go through multi-level processing in order to create emotional consciousness. However, the distribution of the networks that connect the amygdala to the prefrontal cortex is not yet clearly understood.

Another important model in the neuroscience of emotions is Interacting Cognitive Subsystems (ICS). According to this model, our cognitive architecture is composed of a large network of nine distributed subsystems. They all have the same structure and functions, differing only in the mental codes they use to process information (Dalgleish & Power, 1999). Information is distributed and shared across the whole system, since each code in each subsystem is transformed into another code relevant to another subsystem. The first set of subsystems handles visual, acoustic and body-state information; this is the very raw information from the senses and its associated memory representations (hearing a bark and associating it with a dog, or the memory of a smell or a taste). A second set relates to language processing, to controlling speech and its muscle movements, and to the memory systems for those organs (e.g., the position of the mouth). Another subsystem represents autonomic responses (Duff & Kinderman, 2006).

The remaining subsystems relate to higher processing and are more ‘explicit’ in nature. These, the ‘propositional’ and the ‘implicational’, are referred to as the ‘central engine’ of cognition, because they deal with the levels of meaning that comprise our cognition. The propositional subsystem is associated with semantic, ‘knowing that’ representations, while the implicational deals with more abstract meanings, including affect and familiarity, and generally involves schematic models of experience that represent the consequences of semantic information (Duff & Kinderman, 2006). A schematic theme carries an implicit meaning based on schemas and experience, for example ‘hopelessness’, ‘something is going wrong’, ‘I am terrible’, etc.

This information can be coded and recoded in different subsystems since, as noted, they overlap, copying information from one subsystem to another; this recoding creates a new representation (Dalgleish & Power, 1999). For instance, the propositional code is recoded into implicational information, and a body-state code can similarly be recoded into implicational information. These transformations from one subsystem to another are based on the shared regularities of input and output. These regularities determine the overall pattern of information distributed through the whole system, and hence the pattern of representations that will be processed in it. Meanings, let us say, are the outcome of the shared regularities and co-working of the lower-level subsystems. The representations produced by these low-level subsystems affect the patterns of meaning that the individual experiences as higher-order conscious representations. In other words, the individual learns about the world by extracting similarities and pattern-matching from what he sees or hears (the outcome of co-occurring, simultaneously interacting lower-level subsystems). Thus, we develop higher-level representations from the semantic meanings of factual knowledge in the propositional system and mingle them with the implicational meanings (the schematic themes mentioned above) in the implicational system. A summed code from previous subsystems gives rise to a particular meaning; for example, a first subsystem learns that A goes with B (e.g., a loud voice triggers the representation that someone will hurt you). This learned pairing of co-occurrences is then transformed into a new code for the next subsystem, which builds on the former one. Even when information is incomplete, pattern-matching will fill in the missing gaps (Duff & Kinderman, 2006).
However, sometimes a subsystem needs to adapt to new information, make predictions and learn a new response; that is basically how we learn. When a subsystem has learned that A pairs with B and suddenly receives a ‘C’ response, it must adapt. In our example, a loud voice pairs with fear of abuse, but someone might be yelling threats at his partner while at the same time saying how much he loves her, which creates a discrepancy and conflicting messages. Moreover, if the individual has been strongly conditioned to pair A with B, an alternative response (C) will not make much of a difference, and the loving words paired with threatening messages will have little effect (Duff & Kinderman, 2006; Dalgleish & Power, 1999). This is evident in forms of psychopathology where individuals fail to assimilate new pieces of information and react accordingly. Thus, an abusive husband will still resort to violence despite his partner telling him, in the middle of their fight, that she loves him. In various psychopathologies, then, the system remains fixed in very specific patterns of processing between subsystems and cannot adapt to new information.

A final, up-to-date model is SPAARS (the Schematic, Propositional, Analogical and Associative Representation Systems model). In this model, ‘analogical’ refers to the sensory-perceptual level: visual, olfactory, auditory and body-state information (Dalgleish & Power, 1998; Dalgleish & Power, 1999). This information might be either semantic (factual knowledge about objects, sounds and smells) or episodic (related to one's life experiences and events). ‘Propositional’ relates to verbal labels and our representations of universal, abstract concepts (the meanings we all agree on about things). The schematic level, in contrast, refers to our subjective representations, our schematic themes (recall the previous model), and the personal meaning and interpretation of a verbal label (Dalgleish & Power, 1999). For example, the statement ‘the world is bad’ on a subjective level encompasses all the representations associated with the word ‘bad’, and it differs to an extent from a global conceptualization of ‘bad’, free of any subjective meaning, in the same statement. Furthermore, events and their interpretations take place mostly at this schematic level and require appraisals, which in this model are goal-oriented processes that generate different emotions. Thus there are different appraisals, and different levels of appraisal, that generate very different emotions (Dalgleish & Power, 1999).

It is the schematic themes (the subjective interpretations and personal meanings we assign to events) that elicit emotions; the propositional level (the objective and abstract meanings of concepts) is not responsible for generating emotions. Finally, there is another, more direct route to emotion, the ‘associative’ one, in which emotions are elicited automatically without higher-order schematic thinking and appraisal. Appraisal in this case is much less extended and becomes more implicit. Much of our social behaviour, for example, and the continuous repetitive pairing of specific emotions with specific events, make these emotions automatized, so that they are elicited without further elaboration by higher-order appraisal or much thinking. In neurobiological terms, the associative and schematic levels can be related to the dual processing routes of LeDoux's theory, where there is a direct route to emotion involving the amygdala and another involving higher-order processing and the frontal parts of the brain (Dalgleish & Power, 1999; Davidson, Scherer & Goldsmith).

Let us now look briefly at how different brain regions process and modulate cognitive and emotional information. To begin with, the amygdala searches for important information to encode in memory and is biased towards potentially arousing stimuli. The amygdala and the basal ganglia analyse inputs for potential threats. If there is a threat, the amygdala takes over, analysing the visual or auditory features of the stimulus and associating them with the relevant avoidance reactions. In contrast, if the stimulus is rewarding, the basal ganglia take over to store the sequences of thoughts and actions that led to that reward (Ochsner & Barrett, 2002). These processes are largely implicit and summarize our core affect. This core affect can turn into an emotional experience when a meaning (activated from semantic memory) is assigned to an object, together with all the associations to that object. The basal ganglia seem to encode repetitive, reinforced behaviours and thoughts, to predict what comes next, and to turn these reinforcing thoughts and their associated actions into habitual behaviour. This structure receives inputs from the temporal and parietal lobes, encoding the spatial characteristics of a stimulus as well as spatial and object working memory, and it supports motor control through motor centres; at the same time it is connected to the amygdala, the ACC and the orbitofrontal cortex. Lesions to parts of the basal ganglia have been shown to impair facial emotional expression and emotional intonation, and impaired basal ganglia can cause a lack of response to positive, repetitive, reinforcing stimuli (Ochsner & Barrett, 2002).

Most of the knowledge we use to judge emotional stimuli and events comes from organized knowledge structures that distinguish the meanings of different stimuli. These schemas assign meaning to a situation relative to our immediate plans and goals, our bodily sensations, and the actions we take to reduce or enhance our emotional experience. They form our abstract representations through cultural conditioning and interpersonal episodic experiences: young children, for example, readily learn the psychological states associated with a facial expression and the prototypical reactions accompanying a psychological state. As time passes, these repetitive episodic representations become semantic knowledge about the things related to emotional experience and their contextual information. In general, semantic knowledge generates emotions and combines the different memories associated with a particular emotion; it is in this way that we label our different emotional states in order to understand them (Ochsner & Barrett, 2001).

Evaluating emotional experience is mainly the job of the anterior cingulate cortex (ACC) and involves many types of controlled processing. The ACC mediates frontal-lobe functioning and is connected with subcortical areas such as the amygdala and the hippocampus, as well as with areas related to attention and motor control. It monitors behaviour, external rewards, and states of uncertainty, trial and error, and expectancy, which eventually drive the individual to change his behaviour. These evaluations are important for emotional experience because they can turn simple core affect into a discrete emotion through the use of the emotional knowledge related to that core affect: the ACC transforms core affect into an emotion when an expectancy has been violated and an automatic emotional response needs to be altered. At the same time, the ACC not only helps change a behavioural response but also evaluates the possible causes that led to that change (Ochsner & Barrett, 2001).

The OFC and VMPFC enter the picture by computing the affective value of a stimulus, taking into account the reinforcement history and contextual information, for example that a stimulus which used to be positive is now negative, or the reverse. These processes are important in decision-making and in changing an emotional response in order to adapt to the environment. Emotional knowledge from semantic memory matters as well, since it must be deliberately accessed in order to assess, label, and thus alter different emotions; it is that information which helps us interpret the meaning of our affective responses, and by changing the meaning we can amend subsequent behaviour. The take-home message is that once an emotion has been inhibited or reappraised, a new emotional response arises because the input to the network responsible for emotional processing has changed, and so its output, our emotional response, changes as well (Ochsner & Barrett, 2001).

Over the last decades research has been oriented mostly towards explicit emotional regulation, perhaps because it is easier to measure. More recently, however, research has also focused on implicit emotions and their regulation. Emotional regulation can be defined as goal-directed modulation of the duration, intensity, or type of an emotion (Dalgleish & Power, 1999; Gyurak, Gross & Etkin, 2011). Explicit emotional regulation is illustrated in experiments where negative and positive photographs are shown and participants must actively reappraise, changing the way they think about the emotional stimuli in order to decrease negative feelings, or suppress their emotions so that others cannot notice how they are feeling. Reappraisal and suppression are two different techniques for controlling emotions (Gyurak, Gross & Etkin, 2011). Reappraisal uses cognitive and linguistic strategies to reformulate and redefine the meaning of an emotional event. Suppression, by contrast, is by definition the withholding of expressive responses (verbal utterances, gestures, facial expressions) even in the presence of strong sympathetic arousal (Goldin, McRae, Ramel & Gross, 2008). Suppression is a later-acting control technique than other forms of emotion control, and although it suppresses overt expression, it does not alter emotional reactivity in the way reappraisal does (Goldin, McRae, Ramel & Gross, 2008; Ochsner & Barrett, 2001).

Implicit control research, on the other hand, requires experimental tools that measure implicit processes, such as the well-known Stroop task. In one such experiment, photographs of fearful or happy facial expressions were presented with emotion words written across them, and participants had to indicate the emotion shown by the face. On some trials the word contradicted the facial expression, and the reaction to that incongruence (the congruency effect) was measured. Here subjects must somehow regulate their emotional response in order to direct attention to the facial expression and answer correctly. Neuroimaging has shown that this type of emotional regulation is mediated by the ACC and the medial PFC and is accompanied by a reduction in amygdala activity (indicating that the emotional reaction has been dampened) (Dillon, Ritchey, Johnson & LaBar, 2007).
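To make the congruency effect concrete, here is a minimal sketch of how it could be computed from reaction times in such a face-word task. The trial data and function name are invented for illustration; a real analysis would also handle error trials, outliers, and per-subject averaging.

```python
# Hedged sketch: the congruency (interference) effect is the mean reaction
# time on incongruent trials minus the mean on congruent trials.
# A positive value means emotional conflict slowed responding, implying
# that implicit regulation was recruited.

def congruency_effect(trials):
    """trials: list of dicts with 'condition' and 'rt_ms'. Returns ms."""
    def mean_rt(condition):
        rts = [t["rt_ms"] for t in trials if t["condition"] == condition]
        return sum(rts) / len(rts)
    return mean_rt("incongruent") - mean_rt("congruent")

# Invented example data (four trials, two per condition):
trials = [
    {"condition": "congruent",   "rt_ms": 520},
    {"condition": "congruent",   "rt_ms": 540},
    {"condition": "incongruent", "rt_ms": 610},
    {"condition": "incongruent", "rt_ms": 590},
]
print(congruency_effect(trials))  # 70.0
```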

We do seem to regulate our emotions implicitly on a daily basis, not only in experiments, and in fact much of our regulation is an interaction between implicit and explicit processes, despite our impression that we operate in only one mode or the other. Interestingly, repeated explicit regulation gradually becomes implicit and more automatic. For example, someone might remind himself that his girlfriend had a bad day and therefore suppress or regulate his emotional reactions so as not to create further tension; if this turns into a habit, that regulation and reappraisal (the reminding oneself) becomes more implicit (Gyurak, Gross & Etkin, 2011). It is like playing the piano, where the process becomes more automatic with training. Evidence suggests this reappraisal strategy is more beneficial in the long run: people who use it have better interpersonal relationships, better health, and less intense emotional experiences than those who rely on suppression. Furthermore, our goals and belief systems can affect our behaviour implicitly, and most of the time we need to articulate them verbally and give them linguistic labels in order to put them in context and understand them (Ochsner & Barrett, 2001).

Automatizing our goals and intentions can be a successful technique for emotion regulation: when we hold a goal in mind, we can adjust our emotions accordingly even when we are not used to responding that way. In one experiment, participants who were goal-oriented and repeatedly carried out goal-seeking instructions managed to overcome their emotions and were less reactive to threatening images, implying that the importance of the long-term goal they held in mind was sufficient for them to overcome and minimize their fearful reactions for the goal's sake. In the same vein, psychological well-being has been associated with error adjustment (how well we adapt to conflicting emotional stimuli and recruit cognitive control after a failure), assessed through reaction-time slowing after errors. Again, these processes involve an interaction between cognitive areas such as the ACC and emotional areas such as the amygdala. However, it is not clear whether implicit regulation helps as much as explicit regulation in regulating our emotional responses, how each form of regulation can improve the other, or how implicit regulation works in non-experimental settings where attention is not deliberately manipulated in order to assess implicit processing (Goldin, McRae, Ramel & Gross, 2008; Ochsner & Barrett, 2001; Gyurak, Gross & Etkin, 2011).
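The reaction-time slowing after errors mentioned above is commonly quantified as post-error slowing: the mean RT on trials that follow an error minus the mean RT on trials that follow a correct response. Below is a minimal sketch with invented trial data; larger slowing is often read as a stronger cognitive-control adjustment.

```python
# Hedged sketch of post-error slowing from an ordered trial sequence.
# Data and names are hypothetical, for illustration only.

def post_error_slowing(trials):
    """trials: ordered list of dicts with 'rt_ms' and 'correct' (bool)."""
    post_error, post_correct = [], []
    for prev, cur in zip(trials, trials[1:]):   # pair each trial with its successor
        (post_correct if prev["correct"] else post_error).append(cur["rt_ms"])
    mean = lambda xs: sum(xs) / len(xs)
    return mean(post_error) - mean(post_correct)

trials = [
    {"rt_ms": 500, "correct": True},
    {"rt_ms": 510, "correct": False},   # an error...
    {"rt_ms": 620, "correct": True},    # ...followed by a slowed response
    {"rt_ms": 505, "correct": True},
    {"rt_ms": 515, "correct": True},
]
print(post_error_slowing(trials))  # 110.0
```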

Implicit regulation deficits have been significantly linked with individuals suffering from generalized anxiety disorder (GAD), and research reveals that many mood and anxiety disorders are tied more to deficits in the implicit regulation of emotions. It is important to note that on implicit measures anxious patients appear more biased towards threatening information, and depressed people more biased towards emotional information (Lewis, Haviland-Jones & Barrett, 2008). Emotions thus seem to constrain attentional resources and reduce the cognitive processing of important information. Sometimes, though, the opposite occurs: emotions enhance cognitive resources and cognitive processing. This discrepancy can be resolved if we take into consideration the level of emotional arousal and the attentional load, which together influence the cognitive resources available in each condition (Goldin, McRae, Ramel & Gross, 2008). Moreover, interfering with the emotion-generation process by redirecting attention from a negative stimulus to a neutral one has been successful in reducing negative symptoms in patients with emotional disorders. Depressed people also tend to be implicitly biased towards negative interpretations of stimuli, which worsens their symptoms (Goldin, McRae, Ramel & Gross, 2008). This finding matters in the context of psychotherapy and CBT as well, where automaticity and implicit processing are central, since one of the therapist's purposes is to detect and challenge the negative automatic thoughts that give birth to further complex negative schemas.

Finally, summing up all these findings and research, we can conclude that any attempt to separate cognition from emotion is indeed doomed to failure. We have seen the various theories formed over the years, together with the more recent advanced multi-process theories, and the extreme difficulty of defining appraisal and cognition, or implicitness and automaticity, or where emotion ends and cognition begins. Some theorists underestimated higher-order cognitive appraisal, ignoring the nature of appraisals in relation to goals and motives and reducing appraisal to autonomic and automatic responses (which in some theories still included a form of cognitive mediation). Others overextended the cognitive component, assuming that emotions are entirely distinct from cognition, when in reality some emotions are generated more implicitly than they supposed, and the cognitive component should accordingly be reduced. Since the brain functions as distributed networks, there is little point in separating cognition from emotion, and the exact relationship between emotions and cognitive processes, and how they interact, remains unclear. The most reliable way of understanding these interactions is to isolate and examine each component individually, studying emotional expression across many different circumstances and conditions that involve different types of cognitive mediation, while taking into consideration every factor that is humanly possible. Thus, there is no single clear answer, but multiple phenomena that always depend on circumstantial variables.

 

References

Barbalet, J. M. (1999). William James' Theory of Emotions: Filling in the Picture. Journal for the Theory of Social Behaviour, 29(3). Retrieved from http://onlinelibrary.wiley.com/doi/10.1111/1468-5914.00101/pdf


Bechara, A., Damasio, H., Tranel, D., Damasio, A. R. (2005). The Iowa Gambling Task and the somatic marker hypothesis: some questions and answers. Trends in Cognitive Sciences, Vol. 9, No. 4. doi:10.1016/j.tics.2005.02.002

Dalgleish, T., Power, J. M. (1999). Handbook of Cognition and Emotion. John Wiley & Sons.

Damasio, A. R., Everitt, B. J., Bishop, D. (1996). The Somatic Marker Hypothesis and the Possible Functions of the Prefrontal Cortex [and Discussion]. Philosophical Transactions: Biological Sciences, Vol. 351, No. 1346, pp. 1413-1420. Retrieved from http://emotion.caltech.edu/dropbox/bi133/files/Domasio%20&%20Bishop.pdf

Davidson, R. J., Scherer, K. R., Goldsmith, H. H. (2003). Handbook of Affective Sciences. Oxford: Oxford University Press. Retrieved from http://www.mrc-cbu.cam.ac.uk/research/emotion/cemhp/documents/dalgleish_information_processing_appraoches_to_emotion.pdf

Dillon, D. G., Ritchey, M., Johnson, B. D., LaBar, K. S. (2007). Dissociable Effects of Conscious Emotion Regulation Strategies on Explicit and Implicit Memory. Emotion, 7, 354-365. DOI: 10.1037/1528-3542.7.2.354

Duff, S., Kinderman, P. (2006). An Interacting Cognitive Subsystems Approach to Personality Disorder. Clinical Psychology and Psychotherapy, 13, 233–245. DOI: 10.1002/cpp.490

Grandjean, D., Sander, D., Scherer, K. R. (2008). Conscious emotional experience emerges as a function of multilevel, appraisal-driven response synchronization. Consciousness and Cognition, 17, 484–495.

Griffiths, E. P. (2004). Towards a ‘Machiavellian’ Theory of Emotional Appraisal. Retrieved from http://paul.representinggenes.org/webpdfs/Griff.Gray.04.DSPerspective.pdf

Gyurak, A., Gross, J. J., Etkin, A. (2011). Explicit and implicit emotion regulation: A dual-process framework. Cognition and Emotion, 25(3), 400-412. DOI: 10.1080/02699931.2010.544160

Havas, D. A., Glenberg, A. M., Gutowski, K. A., Lucarelli, M. J., Davidson, R. J. (2010). Cosmetic Use of Botulinum A-Toxin Affects Processing of Emotional Language. Psychological Science. Retrieved from http://www.clarkfreshman.com/wp-content/uploads/2010/02/Havas-botox-and-emotional-language.pdf

Lewis, M., Haviland-Jones, J. M., Barrett, L. F. (2008). Handbook of Emotions. New York: Guilford Press.

Matsumoto, D. (1987). The Role of Facial Response in the Experience of Emotion: More Methodological Problems and a Meta-Analysis. Journal of Personality and Social Psychology, Vol. 52, No. 4, 769-774. DOI: 10.1037/0022-3514.52.4.769

Ochsner, K. N., Barrett, L. F. (2001). A Multiprocess Perspective on the Neuroscience of Emotion. In T. J. Mayne & G. Bonanno (Eds.), Emotion: Current Issues and Future Directions (pp. 38-81). New York: Guilford Press. Retrieved from http://www-psych.stanford.edu/~ochsner/pdf/Ochsner_multiprocess_ERG.pdf

Power, M., Dalgleish, T. (2007). Cognition and Emotion: From Order to Disorder. UK: Psychology Press.

Reisenzein, R. (1983). The Schachter Theory of Emotion: Two Decades Later. Psychological Bulletin, Vol. 94, No. 2, 239-264. DOI: 10.1037/0033-2909.94.2.239

Roseman, I. J., Spindel, M. S., Jose, P. E. (1990). Appraisals of Emotion-Eliciting Events: Testing a Theory of Discrete Emotions. Journal of Personality and Social Psychology, Vol. 59, No. 5, 899-915. DOI: 10.1037/0022-3514.59.5.899

Strack, F., Stepper, S., Martin, L. L. (1988). Inhibiting and Facilitating Conditions of the Human Smile: A Nonobtrusive Test of the Facial Feedback Hypothesis. Journal of Personality and Social Psychology, Vol. 54, No. 5, 768-777. DOI: 10.1037/0022-3514.54.5.768

 
