Speaker: Suzanne V.H. van der Feest (CUNY Graduate Center)
Title: Effects of Speaking Style and Context on Online Word Recognition in Adults and Young Children
Abstract: Previous research has found that Clear Speech (CS) improves intelligibility for adults (e.g. Smiljanic & Bradlow, 2009) and that Infant-Directed Speech (IDS) aids perception and development in children (e.g. Cooper & Aslin, 1994). These listener-oriented speaking styles are similar, but differ in their exact acoustic characteristics and in (positive) affect. It is not known whether listener-oriented speaking styles enhance intelligibility in general, or whether only IDS features benefit young children. Additionally, listeners rely on contextual information (lexical, semantic, and syntactic) when decoding a message (Nittrouer & Boothroyd, 1990). This project investigates how the clarity of the speech signal interacts with the availability of contextual cues for adult as well as younger listeners, by examining time-course differences in online word recognition across speaking styles and semantic contexts.
-In Experiments 1a and 1b, young adult listeners were tested in an intelligibility-in-noise task and a pleasantness-rating task to establish a baseline for perception of the target sentences and speaking styles.
-In Experiments 2 and 3, 54 young adult listeners participated in an online visual word recognition experiment. They heard sentences with a high- versus low-predictability semantic context, e.g. 'Mice like to eat cheese' vs. 'He pointed at the cheese' (Fallon et al., 2002), while viewing two pictures on a screen: a target that matched the last word of the auditory stimulus and a distractor. Participants heard target sentences in Conversational (Conv) speech, IDS, and CS. All sentences were presented either in quiet or mixed with speech-shaped noise at a -5 dB SNR. Results showed that, surprisingly, IDS provided perceptual benefits to adult listeners similar to those of CS. Relative to the low-predictability Conv baseline, IDS and CS increased the speed of word recognition for high-predictability sentences, equally in quiet and in noise. However, in the quiet condition lexical access was eventually facilitated by contextual cues even in Conv, whereas listeners in noise reliably fixated the target only when a combination of contextual cues and exaggerated acoustic-phonetic cues was available.
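The abstract does not describe how the noise condition was constructed beyond the -5 dB SNR figure. As a minimal illustration of that mixing step (a sketch, not the authors' stimulus-preparation procedure: the function name, signal durations, and the tone and Gaussian noise stand-ins are all hypothetical), the noise sequence is scaled so that the speech-to-noise power ratio hits the target SNR before the two are summed:

```python
import math
import random

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then return (mixture, scaled_noise). Assumes equal-length sequences."""
    p_speech = sum(s * s for s in speech) / len(speech)
    p_noise = sum(n * n for n in noise) / len(noise)
    # Noise power needed for the target SNR: p_speech / p_target = 10^(snr_db/10)
    p_target = p_speech / (10 ** (snr_db / 10))
    gain = math.sqrt(p_target / p_noise)
    scaled = [gain * n for n in noise]
    return [s + n for s, n in zip(speech, scaled)], scaled

# Hypothetical 1-second stimuli at 16 kHz: a tone standing in for speech,
# Gaussian noise standing in for speech-shaped noise.
random.seed(0)
speech = [math.sin(2 * math.pi * 220 * t / 16000) for t in range(16000)]
noise = [random.gauss(0.0, 0.3) for _ in range(16000)]
mixed, scaled_noise = mix_at_snr(speech, noise, -5.0)

# Check the achieved SNR of the two mixture components.
p_s = sum(x * x for x in speech) / len(speech)
p_n = sum(x * x for x in scaled_noise) / len(scaled_noise)
achieved_snr_db = 10 * math.log10(p_s / p_n)
```

At -5 dB SNR the noise carries roughly three times the power of the speech, which is why the abstract's noise condition is a demanding one.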
-In Experiment 4, eighteen 4-year-olds were tested in a visual word recognition paradigm. Results showed that young children benefited from contextual cues within each speaking style. Unlike for adults, in low-context sentences both CS and IDS improved word recognition compared to Conv: children benefited from speech clarity even in the absence of contextual cues, and they benefited from adult-directed as well as infant-directed listener-enhanced speech. Participants in Experiment 4 (who were not explicitly instructed to look at a target) showed relatively low overall target fixation rates.
-In Experiment 5 (ongoing, n=6), we test younger 2- and 3-year-olds in a similar paradigm, but remove the contextual factor to investigate more directly whether young children benefit from adult-directed listener-enhanced speech (CS) compared to IDS. Participants hear direct instructions to fixate the target object within a neutral carrier phrase, e.g. 'Oh look, what do you see now? Look at the cheese'. Target fixations will be compared across Conv, IDS, and CS sentences and analyzed with the same mixed-effects logistic regression (MELR) model as in Experiment 4.
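The abstract names the analysis only as an MELR model. The sketch below is a deliberately simplified fixed-effects-only stand-in on simulated data (the per-style fixation probabilities, trial counts, and learning settings are all hypothetical): binary target fixations are regressed on dummy-coded speaking style by gradient ascent on the logistic log-likelihood. A true MELR would additionally include per-participant (and per-item) random effects, as in R's lme4 glmer.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Simulated trial-level outcomes: 1 = target fixated, 0 = not.
# The per-style fixation probabilities here are hypothetical.
random.seed(1)
true_p = {"Conv": 0.55, "IDS": 0.75, "CS": 0.75}
trials = [(style, 1 if random.random() < p else 0)
          for style, p in true_p.items() for _ in range(300)]

def features(style):
    # Intercept plus dummy codes for IDS and CS against the Conv baseline.
    return [1.0, float(style == "IDS"), float(style == "CS")]

# Gradient ascent on the logistic log-likelihood (fixed effects only).
beta = [0.0, 0.0, 0.0]
for _ in range(500):
    grad = [0.0, 0.0, 0.0]
    for style, y in trials:
        x = features(style)
        err = y - sigmoid(sum(b * xi for b, xi in zip(beta, x)))
        for j in range(3):
            grad[j] += err * x[j]
    beta = [b + 0.5 * g / len(trials) for b, g in zip(beta, grad)]
```

With Conv as the baseline, positive values of beta[1] and beta[2] indicate that IDS and CS raise the log-odds of fixating the target, which is the pattern of interest in Experiments 4 and 5.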
Findings from Experiments 1-4 show that a combination of semantic cues and listener-oriented acoustic enhancements enables the most reliable and rapid lexical access in young children, and that adult-directed CS may be as beneficial as IDS despite differences in acoustic characteristics and affect. The findings suggest that children rely on bottom-up (sensory) processing more heavily in word recognition of low-context sentences compared to adult listeners in previous studies, as we explore further in Experiment 5. For young children, speech clarity may be even more crucial for reliable word recognition.
Thursday, February 22, 2018
4:15 P.M. Room 6417
All are welcome!