Using EEG to Understand the Effects of Top-Down Processing on Speech Perception
Cognitive psychologists believe that the brain not only interprets bottom-up information received from sensory input but also uses prior knowledge to change what we hear in a top-down manner (Getz & Toscano, 2019). In this study, we collected electroencephalography (EEG) data to determine how strongly top-down processing affects speech perception. Our stimuli consisted of common word pairs (e.g., bunk beds, amusement park) in which the target words were manipulated to have varying voice onset times (VOTs). The sounds /b/ and /p/ lie on a VOT continuum, with /b/ having a short VOT (voiced) and /p/ a longer VOT (voiceless); /d/ and /t/ follow the same pattern. During the experiment, participants identified the initial sound (b, d, p, or t) of the second word in each pair. The first word was either an associative prime or a neutral prime. We began with a behavioral pilot test investigating how various top-down factors affect reaction times, varying the word frequency, neighborhood density, and lexical status of the primes. Responses were most strongly affected by lexical status: participants were more likely to perceive ambiguous targets as words rather than non-words. For associative primes, responses differed according to the expected voicing (a VOT ambiguous between /b/ and /p/ was perceived as /b/ in the context of bunk BEDS but as /p/ in amusement PARK). We are currently using EEG to track voltage fluctuations in the brain and are conducting event-related potential (ERP) analyses to understand the time course of top-down information's influence on speech processing.
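
To illustrate the kind of behavioral analysis described above, the Python sketch below simulates /b/-versus-/p/ categorization responses along a VOT continuum under a neutral prime and a /b/-biasing associative prime, then fits logistic psychometric functions to estimate how the category boundary shifts with context. This is a minimal illustration, not the authors' actual pipeline: the continuum steps, trial counts, boundary values, and prime labels are all hypothetical.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Hypothetical 7-step VOT continuum (ms) from a clear /b/ to a clear /p/.
vot_ms = np.linspace(0, 60, 7)

def logistic(x, boundary, slope):
    """Psychometric function: P(/p/ response) as a function of VOT (ms)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

# Simulated proportions of /p/ responses per continuum step (hypothetical values).
# The associative prime (e.g., "bunk ___") is assumed to bias listeners toward /b/,
# shifting the category boundary to a longer VOT than under the neutral prime.
n_trials = 40
p_neutral = logistic(vot_ms, boundary=30, slope=0.25)
p_biased = logistic(vot_ms, boundary=38, slope=0.25)
obs_neutral = rng.binomial(n_trials, p_neutral) / n_trials
obs_biased = rng.binomial(n_trials, p_biased) / n_trials

# Fit a logistic psychometric function to each condition and compare
# the estimated /b/-/p/ category boundaries.
(bnd_neutral, _), _ = curve_fit(logistic, vot_ms, obs_neutral, p0=[30, 0.2])
(bnd_biased, _), _ = curve_fit(logistic, vot_ms, obs_biased, p0=[30, 0.2])

print(f"Neutral-prime boundary:    {bnd_neutral:.1f} ms VOT")
print(f"/b/-biasing-prime boundary: {bnd_biased:.1f} ms VOT")

Under these simulated data, a boundary at a longer VOT in the biased condition would indicate that more of the ambiguous tokens were heard as /b/, which is the direction of the top-down context effect reported for pairs like bunk BEDS.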