MJHelper

Location:  San Diego, CA

Biography

This person has not yet completed a bio.

EDUCATION:  B.A. from A good one
BLOG:  None provided
CERTIFICATIONS:  None provided
CURRICULUM VITAE:  None provided

Niches

  • Content Marketing
  • Education
  • Food & Drink
  • General
  • Health & Fitness
  • Home & Garden
  • Law
  • Love & Weddings
  • Medical
  • Men
  • Parenting
  • Real Estate
  • Science & Technology
  • Women
  • Writing & Blogging

Writing Sample

If You’re Happy and You Know It:

 Examining Happy Face Loopholes in Attentional Blink

MJH

Abstract

The visual system is bombarded with far more sensory information than it is able to process and identify at the level of awareness. The constant, high input rate of stimuli far exceeds the visual system’s capacity for selecting, grouping, and interpreting data. This limitation results in some stimuli not being processed completely and, in certain cases, being missed altogether. This phenomenon is called the Attentional Blink (AB). The present research proposal specifically addresses the effect of a happy or neutral face target in an RSVP paradigm to study the temporal characteristics of visual attention towards facial expressions. Based on previous research, I hypothesize that if faces have a privileged processing status, then participants' accuracy in target identification will be enhanced if the second target in the RSVP stream is a face, rather than a scrambled image, and that this benefit should be evident within the time window in which the AB occurs.

If You’re Happy and You Know It:

 Examining Happy Face Loopholes in Attentional Blink

            The complex visual environment of today’s fast-paced world exposes the average person to an overwhelming amount of stimuli. In just the first few milliseconds of encountering a scene, the visual system is bombarded with far more sensory information than it can process and identify at the level of awareness. The constant, high input rate of stimuli far exceeds the visual system’s capacity for selecting, grouping, and interpreting data. This limitation results in some stimuli not being processed completely and, in certain cases, being missed altogether. Chun and Wolfe explain:

            What you see is determined by what you attend to. At any given time, the environment presents far more perceptual information than can be effectively processed. [. . .] Complexity and information overload characterize almost every visual environment [. . .] To cope with this potential overload, the brain is equipped with a variety of attentional mechanisms (Chun & Wolfe, 2001, p. 273).

In the last fifty years, and especially in the previous fifteen, much attention in cognitive psychology has been devoted to the limits on, and the rate at which, the human brain can process the abundant information it takes in visually. Additionally, much research has been devoted to discovering whether emotionally salient stimuli interfere with attentional resources.

            In an effort to provide a contextual basis, it is first necessary to define key terms as they are used within this essay’s parameters. Perception refers to the voluntary or involuntary intake of information through cues picked up by one’s senses. Awareness refers to what is in one’s consciousness, or the state that results from perceptual data being processed into working memory. Attention is the complex mechanism responsible for selecting information provided by the senses (in perception) to be “graduated” to a higher level of processing and awareness. Attention, therefore, can be thought of as the necessary liaison between perception and awareness. Researchers agree that attention is a selection device and that selection will occur at some point in the processing of information (Lavie, 1995; Styles, 2006); exactly where or when, however, is the starting point for a multitude of different theories. Currently, there is no consensus among scholars regarding the particular point in information processing at which selection occurs.

            Attentional selection can be categorized into two types, depending on whether it is driven by current internal goals (top-down or endogenous control) or by environmental signals (bottom-up or exogenous control). These two types of attentional selection make it possible to survive and progress in the environment by steering perception toward behaviors and responses that achieve goals and avoid danger.

 

Overview 

            The cluttered, sensory-rich conditions that a person might encounter in a particularly stressful situation can be replicated in a laboratory setting using Rapid Serial Visual Presentation (RSVP). During RSVP, subjects are shown a series of stimuli, sequentially presented in the same spatial location for approximately 100 ms each. Subjects are required either to report all the stimuli observed (full report) or to identify pre-determined target stimuli and ignore the non-target, or distracter, stimuli (partial report; Potter & Levy, 1969). RSVP has provided researchers with a way to assess the rate at which data are analyzed and encoded in temporal attention (Chun & Wolfe, 2001). Lawrence (1971) found that subjects accurately identified a single target 70% of the time at RSVP rates of sixteen images per second when the target was pre-defined either featurally (lower case or upper case letters) or categorically (animal or non-animal words).
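
            To make the paradigm’s mechanics concrete, the sketch below shows, in schematic Python, how an RSVP stream might be presented. The `draw` callback and the timing values are illustrative placeholders of my own, not part of any cited study’s software.

```python
import time

def present_rsvp_stream(stimuli, duration_ms=100, draw=print):
    """Present each item in the same spatial location for a fixed duration.

    `stimuli` is an ordered list of images or characters; `draw` stands in
    for whatever rendering call the experiment software provides.
    """
    onsets = []
    for item in stimuli:
        onsets.append(time.perf_counter())
        draw(item)                       # render the item at fixation
        time.sleep(duration_ms / 1000)   # hold it on screen for ~100 ms
    return onsets                        # onset times let the actual presentation rate be checked

# A sixteen-items-per-second stream (Lawrence, 1971) would use duration_ms = 62.5.
```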

            However, contradictory findings were observed with the introduction of a second target in the RSVP stream. By manipulating the placement of the second target (T2) in relation to the first target (T1), an interesting trend became apparent. Subjects displayed a deficit in accurate detection of T2 if it was presented within 200-500 ms after T1 (Broadbent & Broadbent, 1987). The time difference is usually referred to as the ‘lag,’ or SOA (stimulus onset asynchrony; Chun & Potter, 2000). With every image occurring 100 ms apart, or approximately 8 to 12 items per second, a lag of 1 means the two targets are presented in succession, whereas a lag of 2 means the two targets are separated by one distracter image, and so on. Broadbent & Broadbent (1987) were the first to discover the discrepancy in accurate detection rates resulting from the involvement of a second target. Trials with a correct T1 identification predictably showed lower rates of successful T2 report. While this was noticed at lags 2-5, lags 2 and 3 were particularly susceptible to the effect; lag 1 and lags beyond 5 remained unaffected (Potter, Chun, Banks, & Muckenhoupt, 1998). Broadbent & Broadbent’s (1987) discussion suggested that target interference resulted in a deficit in the identification process for T2.
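
            As a quick illustration of the lag terminology, and assuming the 100 ms item duration mentioned above, the relation between lag and SOA can be written as:

```python
def lag_to_soa_ms(lag, item_duration_ms=100):
    """Convert a lag (serial position of T2 relative to T1) into the T1-T2 SOA.

    With items every 100 ms, lag 1 means T2 immediately follows T1 (SOA = 100 ms),
    while lag 3 means two distracters intervene (SOA = 300 ms).
    """
    return lag * item_duration_ms

# The 200-500 ms window in which T2 report suffers corresponds to lags 2-5.
print([lag_to_soa_ms(lag) for lag in range(1, 6)])  # [100, 200, 300, 400, 500]
```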

            Raymond, Shapiro, & Arnell (1992) are credited with naming this phenomenon the Attentional Blink (AB). The AB can be defined as the remarkable decline in a subject’s accurate identification of a second target in a stream of visual images if it appears in short succession after a first target. This lapse occurs because of an apparent period during the processing of a stimulus in which attentional neural resources seem to “compete” for space, so that stimuli closely following the attended stimulus cannot be recognized (Isaak, Shapiro, & Martin, 1999). In Raymond et al.’s (1992) experiment, subjects were asked to identify a single white letter (T1) amid distracters of black letters, and to confirm or deny the presence of a subsequent letter “X” (T2). Raymond et al. (1992) observed attentional blink evidence when T2 occurred within half a second after T1. However, when subjects were instructed to ignore T1, accurate detection of T2 improved noticeably, showing that the effect resulted from attentional limitations rather than sensory limitations, as previously suggested. Another important finding in the Raymond et al. (1992) study was that T2 detection was strengthened when a blank gap appeared after T1 instead of a distracter, raising further questions about the role that distracter stimuli play in the AB.

            Numerous studies have been conducted with regard to the Attentional Blink, and the effect has been obtained under a wide array of conditions and in a substantial majority of subjects. Typical results show a remarkable drop in second-target identification from 87% to 30% (Chun & Potter, 1995), with the majority of errors occurring at lag 2. Using an RSVP model, the AB has been found with a variety of stimuli, including words (Barnard, Scott, Taylor, May & Knightley, 2004; Anderson, 2005), pictures, symbols, and faces. This replicability suggests that the AB can be useful as a general tool for characterizing properties of perceptual awareness.

 

Attention and Emotion

            A prominent topic concerns exceptions to the AB. Are there loopholes to the attentional blink phenomenon? For example, researchers have experimented with how the AB is affected by target significance, namely of an emotional nature. It is a widely supported and accepted view that basic human emotions have roots in the biological adaptation of our species. The processing of emotional stimuli is thought to be largely automatic, involving primitive, subcortical regions of the brain, such as the amygdala. Patients with bilateral amygdala damage show no AB attenuation with emotional stimuli (Anderson & Phelps, 2001). This finding suggests that the amygdala may be involved in modulating early visual processes when presented with highly charged, salient stimuli (Anderson & Phelps, 2001). The amygdala has also been shown to identify the emotional qualities and significance of a stimulus quickly (Romanski & LeDoux, 1992), even before awareness. Whalen (1998) proposed that heightened amygdala activity might be correlated with the degree of threat conveyed by the stimulus. This is often discussed in terms of its adaptive significance; an attention system that prioritizes information relevant to self-preservation is highly advantageous for the survival of a species (Most & Junge, 2008). Therefore, a logical argument in the scientific community points to a bias in processing that gives affectively charged stimuli preferential access to attentional resources (Yang, Zald, & Blake, 2007).

            It appears that emotionally salient stimuli briefly capture attentional resources at the expense of other, non-emotional stimuli (Pratto & John, 1991), and can interfere even when they are not relevant to the current task (Vuilleumier, Armony, Driver, & Dolan, 2001). In traditional AB experiments, it has been observed that emotional distracters can engage attention so effectively that awareness of any subsequent target is impaired, even when the subject is instructed to focus solely on identifying a single target and on no other task (Most, Chun, Johnson, & Kiehl, 2006). This effect is referred to as “attentional rubbernecking” or, more formally, “emotion-induced blindness” (Most, Chun, Widders, & Zald, 2005a).

            There is a proposed hierarchy in the processing of positive versus negative stimuli and the order in which they are handled. The addition of emotional stimuli, especially faces, either as targets or as distracters, has been shown to modulate the Attentional Blink effect. Eastwood, Smilek, & Merikle (2003) found that it took longer for participants to count features on negative faces compared to positive or neutral faces. These findings indicate that faces with negative expressions may capture attention faster and hold attention for a longer duration than positive expressions. A subsequent study found that neutral expressions (upright and inverted) were detected in less time than happy expressions (Yang et al., 2007). It is suggested that a neutral expression requires more deciphering due to its ambiguity, which in turn ties up more attentional capacity in processing the level of potential threat. Happy faces, in turn, are globally recognized and have no blatantly harmful or dangerous connotation; therefore, happy faces are believed to incite less of a response than oppositely charged faces (i.e., fearful or angry). It is important to clarify, though, that happy faces do elicit an attenuated AB, especially in comparison to neutral, non-emotional stimuli in RSVP paradigms. Mack, Pappas, Silverman, & Gay (2002) reported that, similarly, cartoon smiley faces also survived the AB, and argued that the attention-capturing effect, or saliency, that happy faces naturally exhibit allows them to be processed automatically, regardless of whether the face is actually a cartoon image (Mack et al., 2002). In fact, some researchers have advocated the use of schematic face stimuli in preference to “real” faces, because schematic faces are less prone to potential conflicts in low-level perceptual features and familiarity (Fox et al., 2000; Juth, Lundqvist, Karlsson, & Öhman, 2005; Öhman et al., 2001). Modern research acknowledges that faces take precedence over non-faces in capturing attention, and in limited attentional resource tasks, a fearful or angry face will demand quicker recognition than a happy or neutral face.

            Different, and sometimes opposing, findings in AB experimentation have led to skepticism and controversy concerning the mechanisms behind, and the extent to which, emotion can influence attention, both behaviorally and neurally. Behavioral and neuroimaging data suggest that emotional influences on attention are modulated by different factors such as strategy, goals, or perceptual load (Most, Chun, Johnson, & Kiehl, 2006). An example can be drawn from the RSVP paradigm. Emotional items capture more attention and induce more misses for subsequent targets only if the latter are not uniquely defined. AB attenuation results when participants can expect the same target instead of random targets, no matter how much of an emotional response it initially arouses (Vuilleumier & Huang, 2009).

 

Attentional Theories

            Humans are not passive observers who mechanically take in visual data without discrimination. Exactly how the brain differentiates streams of stimuli, either processing them up to awareness or ignoring and discarding them, is a central question in the study of perception. Several theories explaining why the AB occurs in the visual processing system have been proposed and expanded upon. Raymond et al. (1992) provided the Gating theory based on the conclusions from their earlier RSVP trials. This gating theory suggests that an AB results when attentional resources are being allotted, and essentially depleted, by T1 processing. In other words, there are no attentional resources available for T2 because they are all being used on T1. Consequently, when T2 is admitted into the visual “gate,” its features (color, shape, and meaning) are likely to be confused with T1’s features because of a lack of adequate processing. To protect the identification of T1 amid potential distracters and interference from subsequent stimuli, T2 is subjected to an inhibitory process at an early perceptual level, before awareness (Raymond et al., 1992).

            An alternative to this “gate” interference model of the attentional blink can be found in the so-called bottleneck models, which are bottom-up, stimulus-driven accounts. Bottleneck models hold that attentional resources have a limited capacity, requiring a filter to select data from the stimulus input, and they explain the failure to report a T2 that rapidly follows a T1 as a failure to consolidate T2 into visual working memory. Broadbent & Broadbent (1987) proposed that an early sensor sifts through information from encountered stimuli and selects items to advance to further processing based on their physical characteristics. Any information not selected, according to Broadbent’s (1987) theory, decays before reaching awareness.

            While Broadbent’s model suggests that the filtering used to select information occurs early, before semantic analysis, Treisman’s model holds that this filter works on the physical features of the message only. The crucial difference is that Treisman’s filter attenuates rather than eliminates the unattended material. Treisman’s model does not explain how exactly semantic analysis works. Broadbent and Treisman agree that selection of a single channel occurs at an early stage, before recognition processes begin, and so their models are called Early Selection Models. Selective attention requires that stimuli be filtered so that attention can be directed.

            Deutsch & Deutsch (1963) solved the problems posed by the Broadbent model in a different way from Treisman. Their model suggests that all inputs are subject to high-level semantic analysis before a filter selects material for conscious attention. Selection is therefore later, because it occurs after items have been recognized rather than before, as in Broadbent’s model. Selection is also ‘top-down,’ as opposed to Broadbent’s and Treisman’s models, which are known as ‘bottom-up,’ in that an item that has relevance to someone (their name, for example) or fits the context is likely to be selected. Material is identified or recognized; its relevance, value, and importance are weighed; and the most relevant is passed upward for conscious attention. Deutsch & Deutsch (1963) proposed a more radical departure from Broadbent’s position in their claim that all inputs are fully analyzed before any selection occurs. The bottleneck or filter is thus placed later in the information processing system, immediately before a response is made. Selection at that late stage is based on the relative importance of the inputs.

            Kahneman (1973) proposed that there is a certain amount of Attentional Capacity available, which has to be allocated among the various demands made on it. On the capacity side, when someone is aroused and alert, they have more attentional resources available than when they are lethargic. On the demand side, the attention demanded by a particular activity is defined in terms of mental effort; the more skilled an individual, the less mental effort is required, and so less attention needs to be allocated to that activity. If a person is both motivated (which increases attentional capacity) and skilled (which decreases the amount of attention needed), he or she will have some attentional capacity left over. People can attend to more than one thing at a time as long as the total mental effort required does not exceed the total capacity available. In Kahneman’s (1973) model, allocation of attentional resources depends on a Central Allocation Policy for dividing available attention between competing demands. Attention is a central, dynamic process rather than the result of automatic filtering of perceptual input, and it is largely a top-down process, as opposed to the Filter Models, which suggest a bottom-up process.

A major problem often associated with Kahneman’s theory is that it does not explain how the allocation system decides on policies for allocating attentional resources to tasks.

            An expansion on the bottleneck theory is the two-stage model (Chun & Potter, 1995). In stage 1, all stimuli (at rates of 10 items or fewer per second) are rapidly processed to a preliminary level, where basic features and meanings of the stimuli are automatically registered, but not to a level sufficient for report. At the next level, stage 2, stimuli are either given more attentional resources or discarded. Stimuli that are provided with enough resources are then incorporated into working memory. It is important to note, however, that this latter stage is limited in capacity; while stage 2 is busy consolidating a target, it does not have the capacity to process a second target (Chun & Potter, 1995). Therefore, a T2 appearing in quick succession to a T1 that is being consolidated in stage 2 will usually be missed, resulting in the AB effect. The exception of correct reporting of T2 at lag 1 suggests that T1 and the item that immediately follows it are processed as a pair (Chun & Potter, 1995; Raymond et al., 1992), which accounts for this discrepancy.
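
            The following toy simulation illustrates the logic of the two-stage account. The consolidation time, miss probability, and lag-1 sparing rule are my own illustrative assumptions, not parameters reported by Chun and Potter (1995); the point is only that the model structure reproduces a blink-shaped accuracy curve.

```python
import random

def simulate_two_stage_trial(lag, item_ms=100, t1_consolidation_ms=450,
                             lag1_sparing=True, decay_miss_p=0.7):
    """Toy simulation of a two-stage account (illustrative parameters only).

    Stage 1 registers every item; stage 2 consolidates one target at a time.
    If T2 arrives while stage 2 is still consolidating T1, its stage-1 trace
    is vulnerable and is lost with probability `decay_miss_p`, unless it
    immediately follows T1 (lag-1 sparing).
    """
    t2_onset = lag * item_ms
    stage2_busy = t2_onset < t1_consolidation_ms
    if stage2_busy and not (lag1_sparing and lag == 1):
        return random.random() > decay_miss_p   # usually missed during the blink
    return True                                  # consolidated into working memory

def t2_accuracy(lag, n=2000):
    return sum(simulate_two_stage_trial(lag) for _ in range(n)) / n

for lag in range(1, 9):
    print(lag, round(t2_accuracy(lag), 2))  # accuracy dips at lags 2-4 and recovers by lag 5
```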

            There have been convincing reports published suggesting that the automatic processing of emotional stimuli at the expense of other non-emotional stimuli is highly dependent upon the amount of attentional resources available at that time. Pessoa & Ungerleider (2004) propose that when attention is fully engaged by another task, an emotional distracter does not capture more attention than other competing neutral distracters do. Similarly, Harris & Pashler (2004) demonstrated that high attentional load can reduce the processing of emotional stimuli.

            The once-rigidly separate fields of psychology and neuroscience have seen increased interaction due to technological advancements that affect attention research. Much effort and study is now concerned with producing explanations that encompass both fields’ extensive data and research on attention, resulting in the emergence of new or expanded theories that begin to explain and predict human behavior and psychological responses on the basis of neural evidence.

            Given the lack of clarity and consensus in published results regarding the use of emotional faces, primarily happy faces, and their effect on the allocation of attention, and given the significant advances in knowledge and practice that could result from further study of this matter, I feel the selection of this topic as a foundation for my own theoretical research is warranted.

            An important decision that I believe will help eliminate interfering factors is the planned use of schematic faces in the RSVP paradigm instead of photographs of “real” faces, which could be used to discredit the findings. Previous studies (Purcell et al., 1996) have supported the choice not to use “real” faces due to potential problems in establishing a control. Faces are highly subjective stimuli, and certain facial features can introduce bias or preference in participants, even in the most regulated of experiments. The study will employ only elementary black-and-white, computer-rendered images of faces, using variations in eyebrow positioning and mouth line/tightness to represent a happy facial expression or a neutral facial expression (i.e., bored, indifferent).

            Consequently, I aim to extend previous research by using a schematic-face version of the RSVP paradigm, which will include happy faces and neutral faces as the target stimuli, to assess the depth and temporal resolution of the AB for positive and neutral facial expressions. If facial stimuli are processed preferentially, then a happy or neutral face should result in a reduction of the AB phenomenon when the face appears as the second target. Specifically, my main hypothesis is as follows: if faces have a privileged processing status, then participants' accuracy in target identification will be enhanced if the second target in the RSVP stream is a face, rather than a scrambled image. This effect should be evident when there is a short interval between the two targets (approx. 200–400 ms), which corresponds to the time window of the AB.

            Based on the theory that faces showing happy expressions are processed more quickly than faces showing neutral expressions (Eastwood et al., 2003; Yang et al., 2007), I also predict that overall accuracy in target identification will benefit if the second target (T2) in an RSVP stream is a schematic happy face rather than a schematic neutral face. This effect should be evident within 200-400 ms after T1, which is consistent with AB methodology.
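
            As a rough sketch of how these predictions could be tested, the conventional AB analysis scores T2 accuracy conditional on a correct T1 report, separately for each lag and T2 type. The data frame below contains made-up example rows purely to show the computation; it is not data from this or any cited study.

```python
import pandas as pd

# Hypothetical trial-level results: one row per double-target trial, with the lag,
# the T2 expression ('positive' or 'neutral'), and whether each target was reported correctly.
trials = pd.DataFrame({
    "lag":        [1, 2, 2, 3, 5, 8, 2, 3],
    "t2_type":    ["positive", "neutral", "positive", "neutral",
                   "positive", "neutral", "neutral", "positive"],
    "t1_correct": [1, 1, 1, 1, 1, 0, 1, 1],
    "t2_correct": [1, 0, 1, 0, 1, 1, 0, 1],
})

# Standard AB analysis: T2 accuracy given a correct T1 report, by T2 type and lag.
valid = trials[trials["t1_correct"] == 1]
blink_curve = (valid.groupby(["t2_type", "lag"])["t2_correct"]
                    .mean()
                    .unstack("lag"))
print(blink_curve)  # an attenuated blink for happy faces would appear as higher
                    # accuracy in the 'positive' row at lags 2-4
```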

 

Method

Participants

            Twenty-five people (equal distribution of gender and age) from the university’s psychology department will be selected from a random pool and asked to participate in two experiments. Appointments for each subject, preferably thirty minutes apart, will be set to allow adequate time for a relaxed, low-anxiety testing session. All subjects will be required to pass a basic vision assessment to show proof of normal or corrected-to-normal vision. All participants will be naive as to the purpose of the experiment and will be compensated $5.

 

Apparatus and Stimuli

            Two schematic faces will be used as target stimuli: a positive face and a neutral face. The two faces are the same as those used in a similar study conducted by Öhman et al. (2001). The happy face will be referred to as a “positive” face when instructing participants; the terms “happy” and “positive” have been shown to be interchangeable with regard to expressions (Calvo & Esteves, 2005; Juth et al., 2005). I believe that by using the terms “positive” and “neutral” when instructing participants, I avoid any prejudice or bias associated with the word “happy.” Although it is a small point, I believe it will add to the internal validity of the study.

            The two face stimuli will differ with respect to three main features: eyebrow, eye, and mouth shape. There will also be 30 different distractor stimuli, consisting of scrambled faces (i.e., facial features in random positions and orientations). I plan to mirror Maratos, Mogg, & Bradley’s (2007) RSVP design:

            All stimuli subtended a visual angle of 5.7° × 7.5° and were displayed on a black background at a viewing distance of 50 cm. Stimulus presentation was controlled by Millisecond software (www.millisecond.com). Each stimulus was presented for 128.5 ms using a 70 Hz refresh rate (i.e., each image was displayed for nine screen refreshes at a 70 Hz refresh rate, resulting in a display time of 128.5 ms; these durations were determined in pilot work and checked with an oscilloscope) (Maratos et al., 2007).
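
            To make the quoted timing concrete, a minimal sketch of frame-counting presentation follows. The `flip` callback is a stand-in for a display library’s buffer-swap call, not the actual Millisecond software interface; only the arithmetic (nine refreshes at 70 Hz) comes from the quoted design.

```python
REFRESH_HZ = 70
FRAMES_PER_STIMULUS = 9

frame_ms = 1000 / REFRESH_HZ                    # ~14.29 ms per screen refresh
stimulus_ms = FRAMES_PER_STIMULUS * frame_ms    # ~128.6 ms, i.e. the ~128.5 ms reported above

def show_stimulus(image, flip):
    """Draw `image` for a fixed number of screen refreshes.

    Counting refreshes rather than sleeping is what ties the stimulus
    duration to the monitor's refresh rate.
    """
    for _ in range(FRAMES_PER_STIMULUS):
        flip(image)

print(round(stimulus_ms, 1))  # 128.6
```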

Procedure

            This study will examine the temporal characteristics of visual attention towards facial expressions by presenting a Rapid Serial Visual Presentation (RSVP) paradigm to twenty-five people ranging in age from 18 to 60 years who are students, classified/unclassified staff, or volunteers from the university’s psychology department. Neutral letter stimuli (p, q, d, b) will be presented as the first target (T1), and faces (neutral, happy) as the second target (T2).

            The RSVP task will consist of one block of 10 practice trials and six blocks of 106 test trials (i.e., 636 test trials in total, presented in a single session). Test trials will consist of 156 (25%) single-target trials and 480 (75%) double-target trials. At the beginning of each trial, a small circle will be presented for 214 ms at the central fixation point. On double-target trials, after the central fixation stimulus, the stimulus events will be as follows: an initial sequence of distractor stimuli (ranging from 4 to 8 consecutive stimuli on each trial), the first target (T1), another sequence of distractor stimuli (between 0 and 8 stimuli), the second target (T2), and then the remaining distractor stimuli (between 2 and 13 stimuli). After each RSVP stream, participants will be required to make two consecutive responses to indicate (i) whether one or two face stimuli were presented (by pressing buttons labelled 1 or 2) and (ii) the emotional expression of the last face viewed (by pressing buttons labelled P or N to indicate whether the last face was positive or neutral). Thus, participants will be asked to detect T1, but not to identify its emotional content (N.B. semantic identification of T1 is not necessary to reveal the AB; Barnard et al., 2005).

            This will result in two main trial types (i.e., two levels of the within-subject independent variable of trial type), depending on the emotional content of the T2:

  1. Neutral T1–Positive T2 (positive);
  2. Neutral T1–Neutral T2 (neutral)

For each of the main trial types, the number of intervening distractors between T1 and T2 will vary, so that, on each trial, there can be none, one, two, three, four, five, seven, or eight intervening distractor items between T1 and T2. The primary conditions of relevance to the hypothesis will be those in which there is at least one intervening distractor between T1 and T2.

            The single-target trials will be the same as the double-target trials, except that only one target will be presented (i.e., T1 will be replaced by a distractor stimulus). Thus, the target stimulus on single-target trials will be presented under the same conditions as the T2 on double-target trials (i.e., in all equivalent serial positions in the RSVP stream).
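
            As a rough illustration of how such a trial list might be generated, the sketch below follows the counts and ranges stated above. The specific randomization and counterbalancing scheme is an assumption of mine, since the proposal does not spell it out.

```python
import random

LAGS = [0, 1, 2, 3, 4, 5, 7, 8]          # intervening distractors between T1 and T2
T2_TYPES = ["positive", "neutral"]

def make_double_target_trial():
    """One double-target RSVP trial, following the structure described above.

    Counts for the leading, intervening, and trailing distractor runs are drawn
    from the stated ranges; how they are balanced across blocks is left open here.
    """
    return {
        "leading_distractors": random.randint(4, 8),
        "t1": random.choice(["p", "q", "d", "b"]),
        "intervening_distractors": random.choice(LAGS),
        "t2_type": random.choice(T2_TYPES),
        "trailing_distractors": random.randint(2, 13),
    }

def make_single_target_trial():
    """Same structure, but T1 is replaced by a distractor."""
    trial = make_double_target_trial()
    trial["t1"] = None
    return trial

# 636 test trials: 480 double-target (75%) and 156 single-target (25%), in six blocks of 106.
test_trials = ([make_double_target_trial() for _ in range(480)] +
               [make_single_target_trial() for _ in range(156)])
random.shuffle(test_trials)
blocks = [test_trials[i * 106:(i + 1) * 106] for i in range(6)]
```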


References

Broadbent, D. E., & Broadbent, M. H. P. (1987). From detection to identification: Response to multiple targets in rapid serial visual presentation. Perception & Psychophysics, 42, 105-113.

Chun, M. M., & Wolfe, J. M. (2001). Visual attention. In B. Goldstein (Ed.), Blackwell handbook of perception (pp. 272-310). Oxford, UK: Blackwell Publishers Ltd.

Eastwood, J. D., Smilek, D., & Merikle, P. M. (2001). Differential attentional guidance by unattended faces expressing positive and negative emotion. Perception & Psychophysics, 63, 1004-1013.

Lawrence, D. H. (1971). Two studies of visual search for word targets with controlled rates of presentation. Perception & Psychophysics, 10, 85-89.

Mack, A., Pappas, Z., Silverman, M., & Gay, R. (2002). What we see: Inattention and the capture of attention by meaning. Consciousness and Cognition, 11, 488-506.

Most, S. B., Chun, M. M., Johnson, M. R., & Kiehl, K. A. (2006). Attentional modulation of the amygdala varies with personality. NeuroImage, 31, 934-944.

Most, S. B., Chun, M. M., Widders, D. M., & Zald, D. H. (2005a). Attentional rubbernecking: Cognitive control and personality in emotion-induced blindness. Psychonomic Bulletin & Review, 12, 654-661.

Pratto, F., & John, O. P. (1991). Automatic vigilance: The attention-grabbing power of negative social information. Journal of Personality and Social Psychology, 61(3), 380-391.

Potter, M. C., Chun, M. M., Banks, B. S., & Muckenhoupt, M. (1998). Two attentional deficits in serial target search: The visual attentional blink and an amodal task-switch deficit. Journal of Experimental Psychology: Learning, Memory and Cognition, 24, 979-992.

Raymond, J. E., Shapiro, K. L., & Arnell, K. M. (1992). Temporary suppression of visual processing in an RSVP task: An attentional blink? Journal of Experimental Psychology: Human Perception and Performance, 18, 849-860.

Romanski, L. M., & LeDoux, J. E. (1992). Equipotentiality of thalamo-amygdala and thalamo-cortico-amygdala circuits in auditory fear conditioning. Journal of Neuroscience, 12, 4501-4509.

Vuilleumier, P., Armony, J. L., Driver, J., & Dolan, R. J. (2001). Effects of attention and emotion on face processing in the human brain: An event-related fMRI study. Neuron, 30, 829-841.