Research Labs

The Speech-Language-Hearing Sciences program has multiple research labs to explore and even participate in. Select one of the following to jump to laboratory details:

View the floor map of the SLHS offices

Audiology & Auditory Evoked Potentials Laboratory

Research in this laboratory focuses on behavioral and neurophysiologic processing of auditory information, especially speech.

Lab Meetings: Wednesdays 9:30 - 11:00 (bi-monthly)
Location: Rooms 7401 & 7399

Mission Statement:

Work in this laboratory examines:

  • the development of the acoustic change complex as an electrophysiologic tool for the assessment of speech perception capacity in infants and young children
  • maturation of neurophysiologic and behavioral processing of sound in adverse listening conditions in adults and in children
  • how processing of speech is altered with hearing loss and the extent to which processing is remediated with amplification (hearing aids, cochlear implants)
  • brain-behavior relationships

This research bridges basic mechanisms and clinical application.

Principal Investigator:

Brett Martin, Ph.D. 
Speech-Language-Hearing Sciences,
Graduate School and University Center,
City University of New York,
365 Fifth Ave
New York, New York 10016-4309

Phone: 212-817-8838
Fax: 212-817-1537
email: bmartin@gc.cuny.edu

Faculty  
Brett Martin
Ph.D. Graduate Students  
Reethee Antony, MPhil, CCC-SLP is currently completing her doctoral program at the CUNY Graduate Center. She is an Assistant Professor in the Speech-Language Pathology Department at Misericordia University. Her research interests include the neurophysiologic processing and encoding of speech sounds in native and non-native listeners. She was the recipient of the Martin Gitterman Excellence in Teaching Award in 2016. She received her bachelor’s and master’s degrees in Audiology and Speech-Language Pathology from Sri Ramachandra Medical College and Research Institute, Chennai, India.
Hyungi Chun is a doctoral student at the CUNY Graduate Center; she joined the AAEP lab in 2017. She completed her B.S. and M.S. in Audiology at Hallym University in South Korea. Prior to joining the doctoral program, she worked as a clinical audiologist and researcher at the Soree Ear Clinic and Hallym University of Graduate Studies in South Korea. She is primarily interested in auditory temporal processing in speech perception and how the human brain encodes speech in normal-hearing and hearing-impaired listeners.
Lisa Goldin is a cochlear implant audiologist at the New York Eye and Ear Infirmary of Mount Sinai. Ms. Goldin has been working with individuals with cochlear implants since 2002. She received her Master of Audiology degree from Hunter College and is currently pursuing her Doctor of Philosophy degree at the Graduate Center of the City University of New York. Ms. Goldin’s doctoral research involves using central auditory evoked potentials to examine speech perception abilities in infants. Ms. Goldin is licensed in audiology and hearing aid dispensing through the New York State Department of Education and holds a Certificate of Clinical Competence in Audiology (CCC-A) from the American Speech-Language-Hearing Association (ASHA).
Ashley McCaig, B.S., is a doctoral student at The Graduate Center, CUNY. She joined the Audiology and Auditory Evoked Potentials (AAEP) lab in the Fall of 2017 after completing a Bachelor of Science in chemistry at Pace University, NYC. During her time in the AAEP lab, Ashley received the CUNY Science Fellowship. Her research interests include speech perception in infants and young children. Specifically, she is interested in infant-directed speech and how talkers modify speech while addressing infants. Upon completion of her Ph.D., Ashley also plans to obtain a clinical doctorate in audiology (Au.D.).
Chanie Monoker is a Developmental Audiologist in Lakewood, New Jersey. She is a native New Yorker and received both her B.A. and M.S. from CUNY. She is currently a Ph.D. candidate at the CUNY Graduate Center. Chanie has been awarded a number of grants for her work on the effects of otitis media on auditory processes, including CUNY's DSRG and ASHA's SRG and ARTA awards, which enabled her to present her pilot data at the 2017 ASHA Convention.
Derek Petti, Au.D., completed his audiology capstone project in this laboratory and will continue in the laboratory for his Ph.D. Topic TBD.
Polina Shuminsky, Au.D., is a Ph.D. candidate in Speech-Language-Hearing Sciences at the CUNY Graduate Center. She is interested in investigating various aspects of auditory rehabilitation for older adults with hearing loss. She has been practicing audiology since graduating with a master's degree from Brooklyn College in 1999 and has held a clinical doctorate in audiology since 2005.
Au.D. Graduate Students  
Daria Collins, B.S., received her bachelor's degree from the University of Delaware in Cognitive Science with a concentration in Speech-Language Pathology. Her research interests include studying the speech processing of children with and without hearing loss.
Erika Lanham, B.A., is a fourth-year Doctor of Audiology (Au.D.) student, currently completing her residency year at Audiology Central. She received her Bachelor of Arts degree in Communication Sciences and Disorders and Linguistics at the Macaulay Honors College at Brooklyn College. During her clinical placements she developed a passion for pediatric audiology, leading her to pursue research on the impact of musical training on auditory perception and processing in children.
Alexis Leiderman, B.S., is a first-year graduate student in the doctoral program in Audiology at the CUNY Graduate Center. She received her Bachelor of Science from Binghamton University with a concentration in Integrative Neuroscience. Her current research interests involve auditory processing and cochlear implants, especially how music is perceived by cochlear implant users. She is a recipient of the fellowship for the Audiology and Auditory Evoked Potentials Laboratory under Dr. Brett Martin. She is excited to continue her passion for research and give back to the field of Audiology as a member of the AAEP lab.
Natalie Lisiewics, B.A., is a first-year Au.D. student. She received her bachelor’s degrees in Communication Arts, Sciences, and Disorders, and in Linguistics, and minored in neuroscience in her undergraduate studies. She graduated from Macaulay Honors College at Brooklyn College and is interested in speech and language processing.
Jenna Marie Sparacio, B.S., is currently a first-year student in The Graduate Center’s Au.D. program. She graduated from Hunter College with a Bachelor’s degree in linguistics and rhetoric and minor concentrations in music and psychology. Her research interests include hearing protection for musicians, tinnitus, and vestibular research.
Lindsay Sarah Brown
Allison Leigh Mazzella
Lab Graduates
 

Michelle Kraskin, Au.D. The Acoustic Change Complex:  An investigation of stimulus presentation rate in infants.  Au.D. awarded 2009.

Yining Victor Zhou, Ph.D., CCC-SLP. The role of amplitude envelope in lexical tone perception:  Evidence from Cantonese lexical tone discrimination in native speakers with normal hearing. Ph.D. awarded 2012.

Zhenya Yubliler, Au.D. Eliciting the acoustic change complex in 6-12 month old infants using continuously alternating stimulus presentation.  Au.D. awarded 2012.

Claire Jakimetz, Au.D. Eliciting the acoustic change complex in 0-6 month old infants using continuously alternating stimulus presentation. Au.D. awarded 2012.

Swapna Nataraj, Au.D. Test-retest reliability of the acoustic change complex in infants 1-12 months old. Au.D. awarded 2012.

Jillian Blinkoff, Au.D. Test-retest reliability of the acoustic change complex in infants 12-24 months old. Au.D. awarded 2012.

Meghan Brady, Au.D. The effects of decreased audibility produced by high-pass noise masking of speech stimuli on the acoustic change complex (ACC). Au.D. awarded 2013.

Anna Palterman, Au.D. The effects of low-pass noise masking on the acoustic change complex (ACC). Au.D. awarded 2013.

Derek Petti, Au.D. The salience and perceptual weight of secondary acoustic cues for fricative identification in normal hearing adults. Au.D. awarded 2014.

Michael Higgins, Au.D. The audiometric profile of young adults who report difficulty with speech perception in noise. Au.D. awarded 2015.

Lee Jung An, Ph.D. Effects of native language on perception and neurophysiologic processing of English /r/ and /l/ by native American, Korean and Japanese listeners.  Ph.D. awarded 2016.

Publications

Martin, B.A. and Ross, M. (1994). The effects of requests for repetition on the intensity of talkers' speech. American Journal of Audiology, 3, 69-72.


Stapells, D.R., Gravel, J.S. and Martin, B.A. (1995). Thresholds for auditory brainstem responses to tones in notched noise from infants and young children with normal hearing or sensorineural hearing loss, Ear and Hearing, 16, 361-371.


Stapells, D.R. and Martin, B.A. (1996). The effects of noise masking on the cortical auditory event-related potentials to speech sounds /ba/ and /da/. Canadian Acoustics/Acoustique Canadienne, 24 (3), 74.

Martin, B.A., Sigal, A., Kurtzberg, D. and Stapells, D.R. (1997). The effects of high-pass noise masking on cortical event-related potentials to speech sounds /ba/ and /da/. Journal of the Acoustical Society of America, 101, 1585-1599.


Whiting, K.A., Martin, B.A. and Stapells, D.R. (1998). The effects of broadband noise masking on cortical event-related potentials to speech sounds /ba/ and /da/. Ear and Hearing, 19, 218-231.


Ostroff, J.M., Martin, B.A. and Boothroyd, A. (1998). Cortical evoked responses to acoustic change within a syllable. Ear and Hearing, 19, 290-297.


Martin, B.A. and Boothroyd, A. (1999). Cortical, auditory, event-related potentials in response to periodic and aperiodic stimuli with the same spectral envelope. Ear and Hearing, 20, 33-44.


Martin, B.A., Kurtzberg, D. and Stapells, D.R. (1999). The effects of high-pass noise masking on N1 and the mismatch negativity to speech sounds /ba/ and /da/. Journal of Speech, Language, and Hearing Research, 42, 271-286.


Martin, B.A. and Boothroyd, A. (2000). Cortical, auditory, evoked potentials to changes of spectrum and amplitude. Journal of the Acoustical Society of America, 107, 2155-2161.


Tremblay, K.L., Friesen, L., Martin, B.A. and Wright, R. (2003). Test-retest reliability of cortical evoked potentials using naturally-produced speech sounds. Ear and Hearing, 24, 225-232.


Martin, B.A., Shafer, V.L., Morr, M., Kreuzer, J.A. and Kurtzberg, D. (2003). Maturation of mismatch negativity: Scalp current density analysis. Ear and Hearing, 24, 463-471.


Martin, B.A. and Stapells, D.R. (2005). The effects of low-pass noise masking on auditory event-related potentials to speech. Ear and Hearing, 26, 195-213.


Martin, B.A., Tremblay, K.L., and Stapells, D.R. (2007). Chapter 23. Principles and Applications of Cortical Auditory Evoked Potentials. In J. Eggermont, M. Don, and R. Burkard (Eds.), Auditory Evoked Potentials, Lippincott, Williams and Wilkins.

Martin, B.A. (2007). Can the acoustic change complex be recorded in an individual with a cochlear implant?: Separating neural responses from cochlear implant artifact. Journal of the American Academy of Audiology. 18, 126-140.


Martin, B.A., Korczak, P. and Tremblay, K. (2008). Speech-evoked potentials: From the Laboratory to the Clinic. Ear and Hearing, 29, 285-313.


Martin, B.A., Boothroyd, A., Ali, D. and Leach-Berth, T. (2010). Stimulus presentation strategies for eliciting the acoustic change complex:  Increasing efficiency. Ear and Hearing, 31, 356-366.
 
Shafer, V.L., Schwartz, R., and Martin, B.A. (2011).  Evidence of deficient central speech processing in children with specific language impairment: the T-Complex.  Clinical Neurophysiology, 122(6), 1137-1155.
 
Tan, C-T., Guo, B., Martin, B. and Svirsky, M. (2012). Behavioral and physiological measure for pitch matching between electrical and acoustical stimulation in cochlear implant patients. Journal of the Acoustical Society of America, 131(4), 3388.
 
Zhou, Y. and Martin, B.A. (2012). The role of amplitude envelope in Cantonese lexical tone perception:  Implications for cochlear implants. Proceedings of the 6th International Conference on Speech Processing, Shanghai, China, pp. 629-632.
 
Tan, C.-T., Martin, B., Guo, B., Prosolovich, K., and Svirsky, M. (2012). Auditory evoked responses to pitch matched stimuli in cochlear implant users with residual hearing in the other ear. Proceedings of the 6th International Symposium on Objective Measures in Auditory Implants, Amsterdam, The Netherlands, September, 2012.
 
Svirsky, M., Fitzgerald, M.B., Neuman, A., Sagi, E., Tan, C-T., Ketten, D., and Martin, B. (2012). Current and planned cochlear implant research at New York University Laboratory For Translational Auditory Research, Journal of the American Academy of Audiology, 23 (6), 1-15.
 
Wagner, M., Shafer, V. Martin, B. and Steinschneider, M. (2012). The phonotactic influence on perception of a consonant cluster /pt/ by native-English and native-Polish listeners: A behavioral and event-related potential (ERP) study. Brain and Language, 123, 30-41.
 
An, L., Martin, B.A., and Long, G. (2013). Effects of phonetic experience on neural processing of English /r/ and /l/ by Korean and Japanese listeners. Proceedings of Meetings on Acoustics, Montreal, Canada, Vol. 19 (June), p. 060114.
 
Wang, W-J., Tan, C-T., and Martin, B.A. (2013).  Auditory evoked responses to a frequency glide following a static pure tone.  Proceedings of Meetings on Acoustics, Montreal, Canada, Vol. 19 (June), p. 050122.
 
Wagner, M., Shafer, V.L., Martin, B. and Steinschneider, M. (2013). The effect of native-language experience on the sensory-obligatory components, the P1-N1-P2 and the T-complex. Brain Research, 1522, 31-37.

Tan, C.T., Martin, B.A., and Svirsky, M. (in press). Pitch matching between electrical stimulation of a cochlear implant and acoustic stimuli presented to a contralateral ear with residual hearing.  Journal of the American Academy of Audiology.

Selected Abstracts / Recent Presentations

 

Martin, B.A. (Invited speaker). Cortical auditory evoked potential indices of acoustic change. Program of the New Jersey Speech-Language-Hearing Association Convention. May 1, 2003, Atlantic City, NJ.

Martin, B.A., Lee, W.T., and Kurtzberg, D. Test-retest reliability of the N1-P2 acoustic change complex. Biennial Symposium of the International Evoked Response Audiometry Study Group, June 11, 2003, Tenerife, Spain.

Martin, B.A. (invited speaker). Calibration of short duration sounds used for ERP studies. Cognitive Neurophysiology Laboratory, Albert Einstein College of Medicine, August 29, 2003.

Shafer, V.L., Datta, H., MacRoy, M., Neumann, Y., Garrido, K., McDonald, T., Martin, B.A. Event-related potentials: Future diagnostic and therapeutic tool. Program of the 2003 American Speech-Language-Hearing Association Convention, Chicago, IL, November 13, 2003.

Neuvirth, C., Shea-Miller, K., Martin, B. & Cinberg, J. (2004, November). A closer look: post caloric rebound nystagmus by position change. Program of the 2004 American Speech-Language-Hearing Association Convention, Philadelphia, PA.

Martin, B.A., Lee, W.T., Steinschneider, M. and Kurtzberg, D. Test-retest reliability of the P1-N1-P2 acoustic change complex. Program of the Annual Meeting of the Association for Research in Otolaryngology, Daytona Beach, FL, February 25, 2004.

Martin, B.A. (Invited speaker). Speech-evoked cortical potentials: From the laboratory to the clinic. Program in Speech-Language-Hearing Sciences, Graduate Center, City University of New York, February 9, 2005.

Martin, B.A. (Invited speaker). Speech-evoked cortical potentials: From the laboratory to the clinic. Institute for the Study of Child Development, New Brunswick, NJ October, 2006.

Cone-Wesson, B., Martin, B.A., Tremblay, K.L. (Invited speaker). Cortical auditory evoked potentials: Better ‘late’ than never. American Speech-Language-Hearing Association, Miami, FL, November 17, 2006.

Leach-Berth, T., Ali, D., and Martin, B.A. Stimulus presentation methods for the acoustic change complex: Averaging Efficiency, Association for Research in Otolaryngology, Denver, CO, February 14, 2007.

Kushla, K.J. and Martin, B.A. Test-retest reliability of neurophysiologic responses to change in vowels, American Academy of Audiology, Denver, CO, April 20, 2007.

Martin, B.A. and Kushla, K.J. Test-retest reliability of neurophysiologic responses to acoustic change within vowels, Society for Cognitive Neuroscience, New York, NY, May 6, 2007.

Patel, P., Duff, M., Gomes, H., and Martin, B.A. Electrophysiological measures and selective attention to acoustic and phonemic cues, Society for Cognitive Neuroscience, New York, NY, May 7, 2007.

Gilichinskaya, Y., Petti, D., and Martin, B.A. The effects of reverberation on the neurophysiologic processing of speech. IX Annual Celebration of Science, Engineering and Mathematics at the Graduate Center, New York, NY, April 11, 2008. 

Hinchey, T., Korczak, P., Martin, B.A. and Pallett, S. Effect of hearing aid gain on slow cortical auditory evoked potentials. Maryland Academy of Audiology, Baltimore, MD, September 11, 2008.

Hinchey, T., Korczak, P., Martin, B.A. and Pallett, S. Effects of hearing aid gain on slow cortical auditory evoked potentials. Association for Research in Otolaryngology, Baltimore, MD, February 18, 2009. 

Hinchey, T., Korczak, P., Martin, B.A., and Pallett, S. Effect of hearing aid gain on ACC responses to speech. American Academy of Audiology, Dallas, TX, April 2, 2009.

Hall, J.W. and Martin, B.A. (Invited featured presentation). Auditory Evoked Response Advances: From the Laboratory to the Clinic. American Academy of Audiology, Dallas, TX, April 3, 2009.

Tan, C., Martin, B. and Svirsky, M. Methods for tracking behavioral and physiological changes in acoustic-electric pitch matching. 2009 Conference on Implantable Auditory Prostheses, Lake Tahoe, CA, July 14, 2009.

Martin, B.A. (invited speaker). The acoustic change complex:  From the laboratory to the clinic.  Grand rounds, Long Island Jewish Medical Center, Department of Otolaryngology, New Hyde Park, NY, October 12, 2009.

Martin, B.A. (invited speaker). The acoustic change complex:  From the laboratory to the clinic.  Illinois Association of Audiology, Chicago, IL, January 21, 2010.

Martin, B.A. (invited speaker).  Acoustic change complex using auditory signals.  Objective Measures in Auditory Implants Conference, St. Louis, MO, September 25, 2010.

Tan, C-T., Martin, B.A. and Svirsky, M. Tracking behavioral and electrophysiological changes in acoustic-electric pitch matching.  Objective Measures in Auditory Implants Conference, St. Louis, MO, September 24, 2010.

Czarniak, L.J., Firszt, J.B., Martin, B.A., Ponton, C. Effects of intensity on cortical responses in cochlear implant recipients and normals. Objective Measures in Auditory Implants Conference, St. Louis, MO, September 24, 2010.
 
Martin, B.A., Boothroyd, A., Ali, D. and Berth, T. Stimulus Presentation Strategies for Eliciting the Acoustic Change Complex.  American Speech-Language-Hearing Association Convention. Philadelphia, PA, November 19, 2010.

Wagner, M., Shafer, V., Martin, B., and Steinschneider, M. English and Polish language frequency effects in neural speech processing.  American Speech-Language-Hearing Association Convention, Philadelphia, PA, November 19, 2010.

Martin, B.A., Wang, W-J., An, L. Behavioral and neurophysiological indices of speech processing in noise in adults and in children.  Association for Research in Otolaryngology, Baltimore, MD, February 18, 2011.

Martin, B.A.  The acoustic change complex:  From the laboratory to the clinic.  Program in Speech-Language-Hearing Science Conference on Translational Research.  New York, NY, April 1, 2011.

Wagner, M., Shafer, V.L., Martin, B., and Steinschneider, M. Early and late stages of neural speech processing in native-English and native-Polish listeners. Society for Cognitive Neuroscience, San Francisco, CA, April 2, 2011.

Wagner, M., Shafer, V., Martin, B. and Steinschneider, M. Early and late stages of neural speech processing in native-English and native-Polish listeners:  A behavioral and ERP study.  Faculty Research Forum, St. John’s University, Queens, New York, April 13, 2011.

Tan, C-T., Guo, B., Martin, B.A., and Svirsky, M. Behavioral and physiological measures of frequency mismatch in cochlear implants. Conference on Implantable Auditory Prostheses, Pacific Grove, CA, July 25, 2011.

Wagner, M., Shafer, V.L., Martin, B. and Steinschneider, M. Early and late stages of neural speech processing in native-English and native-Polish listeners:  A behavioral and ERP study.  New Trends in Experimental Psycholinguistics (ERP), Madrid, Spain, September 29, 2011.

Martin, B.A., Antony, R., Wang, W., and An, L. Speech-in-noise processing in adults and children. American Speech-Language-Hearing Association Convention, San Diego, CA, November 17, 2011.

Martin, B.A. (invited speaker). The acoustic change complex for the evaluation of speech discrimination capacity:  Comparison to MMN and P3. The 6th Conference on Mismatch Negativity (MMN) and its Clinical and Scientific Application. New York, NY, May 1, 2012.

Martin, B.A., Shafer, V., Wroblewski, M., and An, L. A comparison of the functional significance of acoustic change complex and mismatch negativity for vowel processing in native-English listeners and late learners of English.  The 6th Conference on Mismatch Negativity (MMN) and its Clinical and Scientific Application. New York, NY, May 4, 2012.
 
Tan, C-T., Guo, B., Martin, B., and Svirsky, M. Behavioral and physiological measure for pitch matching between electrical and acoustical stimulation in cochlear implant patients. Acoustics 2012, Hong Kong, May 16, 2012.

Tan, C.-T., Martin, B., Guo, B., Prosolovich, K., and Svirsky, M. (2012). Auditory evoked responses to pitch matched stimuli in cochlear implant users with residual hearing in the other ear. Proceedings of the 6th International Symposium on Objective Measures in Auditory Implants 2012, Amsterdam, The Netherlands, September, 2012.
 
Martin, B.A., Shafer, V.L., Wroblewski, M., & An, L. Functional significance of the acoustic change complex, mismatch negativity, and P3a for vowel processing in native-English listeners and late learners of English. 164th Meeting of the Acoustical Society of America, Kansas City, Missouri, October 23, 2012.

An, L., Martin, B.A., and Long, G. (2013). Effects of phonetic experience on neural processing of English /r/ and /l/ by Korean and Japanese listeners. 165th Meeting of the Acoustical Society of America, Montreal, Canada, June 3, 2013.

Wang, W-J., Tan, C-T., and Martin, B.A. (2013).  Auditory evoked responses to a frequency glide following a static pure tone.  165th Meeting of the Acoustical Society of America, Montreal, Canada, June 5, 2013.

Wang, W-J., Tan, C-T., and Martin, B.A. (2013). Auditory evoked responses to a frequency glide following a static pure tone. 12th Annual Eastern Auditory Retreat, Boston, MA, June 28, 2013.

Tan, C-T., Martin, B., Seward, K., Prosolovich, K., Guo, B., Glassman, E., and Svirsky, M. Auditory evoked responses to pitch-matched stimuli in unilateral cochlear implant users with residual hearing in the contralateral ear. 2013 Conference on Implantable Auditory Prostheses, Lake Tahoe, CA, July 15, 2013.

Wang, W-J., Martin, B.A., Long, G.R. and Tan, C-T. (2014). Can frequency discrimination be indexed by electrophysiological measures? Journal of the Acoustical Society of America, 135, 2415.

Tan, C.T., Martin, B., Seward, K., Prosolovich, K., Guo, B., Glassman, E., and Svirsky, M. (2014). Auditory evoked responses to pitch matched electroacoustic stimuli in unilateral cochlear implant users with residual hearing in the contralateral ear. Proceedings of the 37th ARO Mid-Winter Meeting, The Manchester Grand Hyatt Hotel, San Diego, California, USA, February.

Tan, C.-T., Glassman, E., Oh, S., Wang, W., Martin, B., and Svirsky, M. (2014). Auditory evoked responses to perceived quality of vocoded speech. Journal of the Acoustical Society of America, 135, 2411.

Tan, C.-T., Martin, B., Guo, B., Prosolovich, K., and Svirsky, M. (2014). Auditory evoked responses to pitch matched electroacoustic stimuli in unilateral cochlear implant users with residual hearing in the unimplanted ear. Proceedings of the 8th International Symposium on Objective Measures in Auditory Implants 2014, Toronto, Canada, October.

An, L., Martin, B.A., & Long, G.R. (2015). Processing of English /r/ and /l/ by native Korean, Japanese, and American English listeners using the acoustic change complex.  38th Annual MidWinter Meeting of Association for Research in Otolaryngology, Baltimore, MD, February 21.

Wang, W-J., Martin, B.A., Long, G., & Tan, C-T. (2015). Cortical encoding of frequency glides. 38th Annual MidWinter Meeting of Association for Research in Otolaryngology, Baltimore, MD, February 24.
 
An, L., Martin, B.A., & Long, G.R. (2015). Encoding of English /r/ and /l/ by American, Korean, and Japanese listeners using the acoustic change complex (ACC). XXIV Biennial Symposium of the International Evoked Response Audiometry Study Group (IERASG), Busan, Korea, May 13.
 
An, L., Martin, B.A. (2015). Discrimination of English /r/ and /l/ by American, Korean, and Japanese listeners using the Mismatch Negativity (MMN). XXIV Biennial Symposium of the International Evoked Response Audiometry Study Group (IERASG), Busan, Korea, May 11.
 
Martin, B.A. (2015). The effects of stimulus alternation rate on the efficiency of the acoustic change complex in infants and toddlers. XXIV Biennial Symposium of the International Evoked Response Audiometry Study Group (IERASG), Busan, Korea, May 13.
 
Martin, B.A. (2015). Stimulus presentation strategies for eliciting the acoustic change complex in infants. XXIV Biennial Symposium of the International Evoked Response Audiometry Study Group (IERASG), Busan, Korea, May 13.
 
Martin, B.A. (2015). The effects of reverberation on auditory evoked potentials. XXIV Biennial Symposium of the International Evoked Response Audiometry Study Group (IERASG), Busan, Korea, May 11.
 
Tan, C.T., Martin, B.A., Seward, K, Prosolovich, K., Guo, B., Glassman, E., Svirsky, M. (2015). Auditory evoked responses to electro-acoustic pitch matching in unilateral cochlear implant users with contralateral residual hearing.  XXIV Biennial Symposium of the International Evoked Response Audiometry Study Group (IERASG), Busan, Korea, May 11.

Martin, B.A. & Moncrieff, D. (2015).  Cortical Response V.  Session moderators.  XXIV Biennial Symposium of the International Evoked Response Audiometry Study Group (IERASG), Busan, Korea, May 14.

Iannotta, J.A., Shafer, V.L., Martin, B.A., Gomes, H. (2016). Maturation of speech discrimination and attentional requirements in late childhood.  Cognitive Neuroscience Society, New York, NY, April 3. 

Higgins, M.R. & Martin, B.A. (2016). The audiometric profile of young adults who report difficulty with speech perception in noise. New York State Speech-Language-Hearing Association Convention, Saratoga Springs, NY, April 8.

Zhou, V. & Martin, B.A. (2016). Cantonese Tone Discrimination using Amplitude Envelope:  Implications for Cochlear Implants.  Speech Prosody 2016, Boston, MA, June 3.

Equipment in this laboratory includes:

  • A 64-channel Neuroscan system capable of stimulus presentation, electrophysiological data acquisition, behavioral response collection and data processing
  • A portable 32-channel system for off-site data collection
  • A Brain Vision system
  • BESA for brain electrical source analysis
  • MATLAB
  • EEGLAB
  • Computer stations equipped for data processing and analysis
  • GSI 61 audiometer for hearing evaluation
  • Tympstar middle ear analyzer
  • Audioscan system for electroacoustic and real ear analyses of hearing aids
  • Two sound booths
  • Intelligent visual reinforcement audiometry system (IVRA)
  • E-prime for behavioral research experiments
  • Software for the recording, generation, and editing of sound
  • Software for Klatt synthesis of speech

Child Language Laboratory

The goal of the Child Language Laboratory is to understand the nature and underlying causes of childhood language impairments. To this end, we study the relationships among speech perception, the processing of language, and the brain mechanisms underlying language comprehension and production in young children acquiring language typically and atypically.


Lab Meetings: Wednesdays 1:00pm in Room 7400.15

Mission Statement:

Our goal is to understand the nature and underlying causes of childhood language disorders with the long-term aim of developing improved approaches to assessment and intervention. We study language processing underlying speaking and understanding in children acquiring language typically and atypically. Included among the populations we study are monolingual and bilingual children with Developmental Language Disorders (also known as SLI) acquiring English and other languages (Japanese, Hebrew, Spanish, Brazilian Portuguese, Russian, and Thai), children with Autism Spectrum Disorders, children with Auditory Processing Disorders, and children who use cochlear implants.

The laboratory consists of three areas: (1) computer-based word and sentence processing for language production and comprehension, (2) brain responses to spoken language, and (3) eye tracking for language processing. The Word and Sentence Processing Area includes on-line and off-line computers, digital audio and video recorders, and specialized software that allows us to construct and administer language processing tasks for pre-school and school-aged children. The Electrophysiology Area includes a 128-channel Neuroscan system and associated software to conduct studies of the brain bases of language processing. We also have Tobii Eye Trackers, which use eye gaze and pupil size as measures of language processing. The laboratory has engaged in cooperative projects with the Eden II School and the New York Eye and Ear Infirmary.

We are looking for child research participants.

Laboratory Director:

Richard G. Schwartz, Ph.D. 
Presidential Professor
Speech-Language-Hearing Sciences,
Graduate School and University Center,
City University of New York,
365 Fifth Ave
New York, New York 10016-4309

Phone: 212-817-8804
Fax: 212-817-1537
email: rschwartz@gc.cuny.edu

Highlights: Dr. Schwartz was interviewed by Scientific American for their article on Specific Language Impairment in children:
Opening a Window into the Minds of Language-Impaired Children

Faculty  
Richard G. Schwartz, Ph.D., Director, is a Presidential Professor of Speech and Hearing Sciences at The Graduate School and University Center of the City University of New York. He attended McGill University, received his M.S. in Speech Pathology from the University of South Florida in 1974, and his Ph.D. in Speech Pathology and Developmental Psychology from the University of Memphis (formerly Memphis State University) in 1978. He is a certified and licensed speech-language pathologist. Dr. Schwartz has also held academic appointments at the University of Pittsburgh, Purdue University, Tel Aviv University, Weill Medical College of Cornell University, and Albert Einstein College of Medicine. He is currently the Director of Research for the Ear Institute: Hearing and Learning Center/Cochlear Implant Center at the New York Eye and Ear Infirmary. He has published widely on speech and language disorders in children in peer-reviewed scientific journals, contributed numerous chapters to academic textbooks and monographs, and has served as the editor of the Journal of Speech, Language, and Hearing Research. He is the editor of the Handbook of Child Language Disorders published by Psychology Press. Dr. Schwartz’s research has been supported by grants from the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health since 1979. He has served as the chair/organizer of numerous national and international conferences. His current research interests include speech and language processing in children with Specific Language Impairment, children with Cochlear Implants, and children with Autism, as well as the neurobiology of childhood language impairments.

Dr. Schwartz CV

Graduate Students  
Headshot for the Speech-Language-Hearing Sciences program
Erin Reilly, CCC-SLP, TSLD, received her undergraduate degree in Speech and Language Disorders at the State University College at Geneseo, NY, and her Master's degree in Speech-Language Pathology at Edinboro University of Pennsylvania. Erin is a New York State licensed Speech-Language Pathologist and Teacher of Speech and Language Disabilities. She has provided therapy to and evaluated students with a range of disabilities in Maine School Administrative District #49, Rainbow Preschool in Commack, NY, and the New York City Department of Education. Currently, she is a Ph.D. student at the CUNY Graduate Center. Her research interests include language development in children with autism.
Headshot for the Speech-Language-Hearing Sciences program
Georgia Drakopoulou received her Bachelor's degree in Speech and Language Therapy from the Technological Educational Institute of Patras in Greece. She joined the doctoral program in Speech-Language-Hearing Sciences at the Graduate Center in Fall 2012 and is currently a Level II student. She is also working as a research assistant at the Language and Hearing Research Lab at NYEE, investigating lexical access in children with cochlear implants under the supervision of Dr. Richard Schwartz. Her research interests include language development in children with autism spectrum disorders, with a focus on semantics and pragmatics in discourse contexts.
Headshot for the Speech-Language-Hearing Sciences program
Katsiaryna (Katya) Aharodnik, M.A., joined the Child Language Lab in the fall of 2013. She received a Bachelor's and a Master's degree in Applied Linguistics from Montclair State University, NJ. At Montclair State University, Katya conducted research with researchers from Charles University, Prague, on morphosyntactic processing in native speakers and learners of Slavic languages. Currently, Katya is a Level I student and a research assistant in the Child Language Lab. Her research interests are bilingualism and developmental language disorders, language acquisition, and the neurobiology of language processing, with a focus on morphosyntax.
Headshot for the Speech-Language-Hearing Sciences program
Bernadette Ojukwu, B.S., joined the Ph.D. program in Speech-Language-Hearing Sciences in the Fall of 2015. Before starting the program, she interned in a psycholinguistics lab at the University of Massachusetts at Amherst, the same institution from which she received her bachelor's degree in Biology. She has a variety of research interests ranging from syntax to pragmatics. She is also interested in understanding how cognition plays a role in language processing.
Headshot for the Speech-Language-Hearing Sciences program
Thorfun Gehebe, M.S., CCC-SLP, is a speech-language pathologist with clinical experience in child language impairments, speech sound disorders, augmentative and alternative communication, fluency disorders, voice disorders, and dysphagia. She graduated from the College of Saint Rose with her B.S. and M.S. in Communication Sciences and Disorders. Thorfun is currently the lab manager for the Child Language Laboratory. Her research interests are cross-linguistic studies in developmental language disorders/specific language impairment, cognitive control, and bilingualism.
Headshot for the Speech-Language-Hearing Sciences program
Katherine Paulino, CF-SLP, TSSLD (Bilingual Extension), received her undergraduate and Master's degrees in Speech-Language and Hearing at Lehman College in the Bronx, NY. Katherine has performed diagnostic evaluations and intervention for varied populations (e.g., early childhood, school age, and adult) with a variety of communication disorders. She has also worked with individuals with autism spectrum disorder, articulation and phonological disorders, aphasia, language learning disabilities, and hearing loss. Katherine currently works with Spanish-English bilingual children with developmental language delays in a classroom setting. Her research interests include developmental language impairments, working memory, executive functions, and resistance to interference.

 

Selected Publications (* indicates Dr. Schwartz’s students or trainees)

Leonard, L.B., & Schwartz, R.G. (1977). Focus characteristics of single word utterances after syntax. Journal of Child Language, 5, 151-158.

Leonard, L.B., Schwartz, R.G., Folger, M.K., & Wilcox, M.J. (1978). Some aspects of child phonology in imitative and spontaneous speech. Journal of Child Language, 5, 403-416.

Schwartz, R.G., & Folger, M.K. (1978). Sensorimotor development and descriptions of child phonology: A preliminary view of phonological analysis for Stage I speech. Papers and Reports in Child Language, 13, 8-15.

Leonard, L.B., Schwartz, R.G., Folger, M.K., Newhoff, M., & Wilcox, M.J. (1979). Children's imitations of lexical items. Child Development, 50, 19-27.

Schwartz, R.G. (1980). Presuppositions and children's metalinguistic judgments: Concepts of life and awareness of animacy restrictions. Child Development, 51, 364-371.

Schwartz, R.G., & Leonard, L.B. (1980). Words, objects and actions in early lexical acquisition. Papers and Reports in Child Language Development, 9, 29-36.

Schwartz, R.G., Leonard, L.B., Folger, M.K., & Wilcox, M.J. (1980). Early phonological behavior in normal and language disordered children: Evidence for a synergistic view of language disorders. Journal of Speech and Hearing Disorders, 45, 357-378.

Schwartz, R.G., Leonard, L.B., Wilcox, M.J., & Folger, M.K. (1980). Again and again: Reduplication in child phonology. Journal of Child Language, 7, 75-88.

Leonard, L.B., Schwartz, R.G., Morris, B., & Chapman, K. (1981). Factors influencing early lexical acquisition: Lexical orientation and phonological composition. Child Development, 52, 882-887.

Leonard, L.B., Schwartz, R.G., Chapman, K., Rowan, L., Prelock, P., Terrell, B., Weiss, A., & Messick, C. (1982). Early lexical acquisition in children with specific language impairment. Journal of Speech and Hearing Research, 25, 554-564.

Schwartz, R.G., & Leonard, L.B. (1982). Do children pick and choose? Phonological selection and avoidance in early lexical acquisition. Journal of Child Language, 9, 319-336.

Schwartz, R.G. (1983). The role of action in early lexical acquisition. First Language, 4, 5-20.

Schwartz, R.G., & Leonard, L.B. (1983). Some further comments on reduplication in child phonology. Journal of Child Language, 10, 441-448.

Schwartz, R.G., & Terrell, B.T.* (1983). The role of input frequency in early lexical acquisition. Journal of Child Language, 9, 57-64.

Schwartz, R.G., & Leonard, L.B. (1984). Words, objects and actions in early lexical acquisition. Journal of Speech and Hearing Research, 27, 119-127.

Terrell, B.T.*, Schwartz, R.G., Prelock, P.*, & Messick, C.* (1984). Symbolic play in normal and language impaired children. Journal of Speech and Hearing Research, 27, 424-429.

Schwartz, R.G., Chapman, K.*, Prelock, P.A.*, Terrell, B.Y.*, & Rowan, L.E.* (1985). Facilitation of early syntax through discourse structure. Journal of Child Language, 12, 199-207.

Schwartz, R.G., & Camarata, S.* (1985). Examining relationships between input and language development: Some statistical issues. Journal of Child Language, 12, 199-207.

Camarata, S.*, & Schwartz, R. (1985). Production of object words and action words: Evidence for a relationship between phonology and semantics. Journal of Speech and Hearing Research, 28, 323-330.

Leonard, L.B., Camarata, S.*, Schwartz, R., Chapman, K.*, & Messick, C. (1985). Homonymy in the speech of children with specific language impairment. Journal of Speech and Hearing Research, 28, 215-224.

Schwartz, R.G., Chapman, K.*, Terrell, B.Y.*, Prelock, P.A.*, & Rowan, L.E.* (1985). Facilitating word combination in language impaired children through discourse structure. Journal of Speech and Hearing Disorders, 50, 31-39.

Schwartz, R., & Leonard, L.B. (1985). Lexical imitation and acquisition in language impaired children. Journal of Speech and Hearing Disorders, 50, 141-149.

Schwartz, R.G., Leonard, L.B., Messick, C.*, & Chapman, K.* (1987). Acquisition of object names in children with specific language impairment: Action context and word extension. Applied Psycholinguistics, 8, 233-244.

Schwartz, R.G., Leonard, L.B., Frome Loeb, D.*, & Swanson, L.* (1987). Attempted sounds are sometimes not: An expanded view of phonological selection and avoidance. Journal of Child Language, 14, 411-419.

Leonard, L.B., Schwartz, R.G., Swanson, L.*, & Frome Loeb, D.* (1987). Some conditions that promote unusual phonological behavior in children. Journal of Clinical Linguistics and Phonetics, 1, 23-34.

Pollock, K.E.*, & Schwartz, R.G. (1987). Phonological perception of early words. Papers and Reports on Child Language Development, 26, 88-96.

Pollock, K.E.*, & Schwartz, R.G. (1988). Structural aspects of phonological development: case study of a disordered child. Language, Speech, and Hearing Services in the Schools, 19, 5-16.

Schwartz, R.G. (1988). Early action word acquisition in normal and language impaired children. Applied Psycholinguistics, 9, 111-122.

Terrell, B.Y.*, & Schwartz, R.G. (1988). Object transformations in the play of language impaired children. Journal of Speech and Hearing Disorders, 53, 449-456.

Leonard, L.B., Schwartz, R.G., Allen, G.D., Swanson, L.A.*, & Loeb, D.F.* (1989). Unusual phonological behavior and the avoidance of homonymy in children. Journal of Speech and Hearing Research, 32, 583-590.

Tenorio, M.F., Schwartz, R.G., & Tom, M.D.* (1990). Unrestricted text-to-speech conversion with a distributed adaptive network. Pattern Recognition Letters.

Loeb, D.*, & Schwartz, R.G. (1990). Language characteristics of a linguistically precocious child. First Language, 10, 1-18.

Tenorio, M.F., Schwartz, R.G., & Tom, M.D.* (1990). Adaptive networks as a model for human speech development. Computers in Human Behavior, 6, 291-313.

McGregor, K.*, & Schwartz, R. G. (1992). Converging evidence for underlying phonological representation in a misarticulating child. Journal of Speech and Hearing Research, 35, 596-603.

Schwartz, R.G. (1993). Clinical applications of advances in phonological theory. Language, Speech, and Hearing Services in the Schools, 23, 269-276.

Davidson, C.*, & Schwartz, R.G. (1995). Semantic boundaries in the lexicon. Linguistics and Education, 7, 47-94.

Valafar, F., Homayoun, V., Ersoy, O., & Schwartz, R.G. (1995). Comparative studies of two neural network architectures for modeling of human speech production. IEEE International Conference on Neural Networks - Conference Proceedings, 4, 2056-2061.

Schwartz, R. G. (1995). The effect of familiarity on word duration in children's speech. Journal of Speech and Hearing Research, 38, 76-84.

Schwartz, R.G., & Goffman, L.* (1995). Metrical patterns of words and production accuracy. Journal of Speech and Hearing Research, 38, 876-888.

Gates, G., Schwartz, R.G., et al. (1997). NIH Consensus conference: Cochlear implants in children and adults. Journal of the American Medical Association, 274, 1955-1961.

Wallace, I., Gravel, J., Schwartz, R.G., & Ruben, R. (1996). Otitis media, parental linguistic style, and language skills of 2-year olds. Developmental and Behavioral Pediatrics, 17, 27-35.

Goffman, L.E.*, & Schwartz, R.G. (1996). Information level and young children’s phonological accuracy. Journal of Child Language, 23, 337-348.

Schwartz, R. G., Petinou, K.*, Goffman, L.*, Lazowski, J.* & Cartusciello, C.* (1996). Young children’s production of syllable stress: An acoustic analysis. Journal of the Acoustical Society of America, 99, 3192-3200.

Mody, M.*, Schwartz, R.G., Gravel, J.S., & Ruben, R.J. (1999). Speech perception in children with and without histories of otitis media. Journal of Speech, Language, and Hearing Research, 42, 1069-1079.

Petinou, K.*, Schwartz, R.G., Mody, M.*, & Gravel, J. (1999). The impact of otitis media with effusion on early phonetic inventories: A longitudinal prospective investigation. Clinical Linguistics and Phonetics, 13, 351-367.

Ruben, R.J., & Schwartz, R.G. (1999). Necessity versus sufficiency: Auditory input in language acquisition. International Journal of Otolaryngology, 47, 137-140.

Petinou, K.*, Schwartz, R.G., Gravel, J., & Raphael, L. (2000). A preliminary account of morphophonological perception in young children with and without otitis media. International Journal of Communication Disorders, 36, 21-42.

Shafer, V.*, Schwartz, R.G., Morr, M., Kessler, K., & Kurtzberg, D. (2000). Deviant neurophysiological asymmetry in children with language impairment. NeuroReport, 11, 3715-3718.

Shafer, V.*, Schwartz, R.G., Morr, M., Kessler, K., Kurtzberg, D., & Ruben, R. (2000). Neurophysiological indices of language impairment in children. Acta Otolaryngology, 121, 297-300.

Jacobson, P.*, & Schwartz, R.G. (2002). Production of inflectional morphology in incipient Spanish-English bilingual children with SLI. Applied Psycholinguistics, 23, 23-41.

Levey, S.*, & Schwartz, R.G. (2002). Young children’s syllable omissions. Quarterly Journal of Communication Disorders, 23, 168-176.

Marton, K.*, & Schwartz, R.G. (2003). Working memory capacity and language processes in children with specific language impairment. Journal of Speech, Language, and Hearing Research, 46, 1138-1153.

Shafer, V.L., Schwartz, R.G., & Kurtzberg, D. (2004). Language-specific memory traces of consonants in the brain. Cognitive Brain Research, 18, 242-254.

Wong, P., Schwartz, R.G., & Jenkins, J. (2005). Perception and production of lexical tones by 3-year-old Mandarin-speaking children. Journal of Speech, Language, and Hearing Research, 48, 1065-1079.

Jacobson, P.*, & Schwartz, R.G. (2005). Regular and irregular past tense in bilingual children with specific language impairment. American Journal of Speech-Language Pathology, 14, 313-323.

Marton, K.*, Schwartz, R.G., & Braun, A. (2005). The effect of age and language structure on working memory performance. Proceedings from the 27th Annual Conference of the Cognitive Science Society (pp. 1413-1418). Austin, TX: The Cognitive Science Society, Inc.

Shafer, V.L., Kessler, K.L., Morr, M.L., Schwartz, R.G., & Kurtzberg, D. (2005). Spatiotemporal brain activity to “the” in discourse. Brain and Language, 93, 277-297.

Shafer, V.L., Morr, M.L., Datta, H., Kurtzberg, D., & Schwartz, R.G. (2005). Neurophysiological indices of speech processing deficits in children with specific language impairment. Journal of Cognitive Neuroscience, 17(7), 1168-1180.

Marton, K., Schwartz, R.G., Farkas, L., & Katsnelson, V. (2006). Verbal working memory and executive functions in Hungarian children with specific language impairment: A cross-linguistic analysis. International Journal of Language and Communication Disorders, 41(6), 653-673.

Girbau, D., & Schwartz, R.G. (2007). Nonword repetition in Spanish-speaking children with language impairment. International Journal of Language and Communication Disorders, 42, 59-75.

Hestvik, A., Shafer, V., Maxfield, N., & Schwartz, R.G. (2007). Brain responses to filled gaps. Brain and Language, 100, 301-316.

Shafer, V.L., Ponton, C., Datta, H., Morr, M., & Schwartz, R.G. (2007). Neurophysiological indices of attention in children with specific language impairment. Clinical Neurophysiology, 118, 1230-1243.

Seiger, L., & Schwartz, R.G. (2008). Lexical access during word production in school-aged children with and without SLI. International Journal of Language and Communication Disorders, 43, 1-23.

Girbau, D., & Schwartz, R.G. (2008). Phonological working memory in incipient bilingual Spanish-English children with specific language impairment. Journal of Communication Disorders, 41, 124-145.

Datta, H., Shafer, V.L., Morr, M., Kurtzberg, D., & Schwartz, R.G. (in press). Electrophysiological indices of discrimination of long duration, phonetically similar vowels in children with typical and atypical language development. Journal of Speech, Language, and Hearing Research.

Hestvik, A., Schwartz, R. G., & Tornyova, L. (in press). Antecedent reactivation in relative clauses in children with specific language impairment and typically developing children. Journal of Psycholinguistic Research.

Velez, M., & Schwartz, R.G. (in press). Spoken word recognition in school-age children with SLI: Semantic, phonological, and repetition priming. Journal of Speech, Language, and Hearing Research.

Epstein, B., Shafer, V.L., Melara, R., & Schwartz, R.G. (2014). Can children with SLI detect cognitive conflict? Behavioral vs. electrophysiological evidence. Journal of Speech, Language, and Hearing Research, 57, 1-15. 

Girbau-Massana, D., Garcia-Martí, G., Martí-Bonmatí, L., & Schwartz, R.G. (2014). Gray-white matter and cerebrospinal fluid volume differences in children with specific language impairment and/or reading disability. Neuropsychologia, 56, 90-100.

Wechsler-Kashi, D., Schwartz, R.G., & Cleary, M. (2014). Lexical naming in children with cochlear implants. Journal of Speech, Language, and Hearing Research, 57, 1870–1882.

Fortunato-Tavares, T., de Andrade, C.R.F., Belfi-Lopes, D.M., Tornyova, L., & Schwartz, R.G. (2015). Syntactic assignment and working memory in children with specific language impairment, autism or Down Syndrome. Clinical Phonetics and Linguistics, 29, 499-522.

Schwartz, R.G. (2015). The when and how of input frequency effects on lexical development. Journal of Child Language, 42, 298-300.

Victorino, K.R., & Schwartz, R.G. (2015). Control of auditory attention in children with specific language impairment. Journal of Speech, Language, and Hearing Research, 58, 1245-1257.

Cantiani, C., Choudhury, N.A., Yu, Y.N., Shafer, V.L., Schwartz, R.G., & Benasich, A.A. (2016). From sensory perception to lexical-semantic processing: An ERP study in nonverbal children with autism. PLoS ONE, 11(8), e0161637. doi:10.1371/journal.pone.0161637

Eichorn, N., Marton, K., Schwartz, R.G., Melara, R.D., & Pirutinsky, S. (2016). When less can be more: Dual task effects on speech fluency in stuttering and fluent adults. Journal of Speech, Language, and Hearing Research, 59, 415-429.

Fortunato-Tavares, T., Howell, P., Schwartz, R.G., Juste, F.S., & de Andrade, C.R.F. (2016). Children who stutter exchange linguistic accuracy for processing speed in sentence comprehension. Applied Psycholinguistics, 37, 1-25.

Rota-Donahue, C., Schwartz, R.G., Shafer, V., & Sussman, E. (2016). Perception of small frequency differences in children with auditory processing disorder and/or specific language impairment. Journal of the American Academy of Audiology, 27, 489-497.

Schwartz, R.G., Hestvik, A., Seiger-Gardner, L., & Almodovar, D. (2016). Processing binding relations in children with SLI. Journal of Speech, Language, and Hearing Research, 58, 1384-1394. doi:10.1044/2016_JSLHR-L-15-0107

Fortunato-Tavares, T., Schwartz, R.G., de Andrade, C.F., Marton, K., & Houston, D. (2018). Prosodic boundary effects on syntactic disambiguation in children with cochlear implants. Journal of Speech, Language, and Hearing Research, 60, 1-15. doi:10.1044/2018_JSLHR-L-17-0036
 

In Progress

Scheffler, F.L.V.*, & Schwartz, R.G. (2008, under review). Speech perception and lexical effects in specific language impairment. Journal of Speech, Language, and Hearing Research.

Jamulowicz, L.*, Schwartz, R. & Raphael, L. Vowel duration cues to final voicing in the perception of young children.

Scheffler, F.*, Schwartz, R., Gravel, J., Argyros, T.*, & Daum, S.*. Phonology at 24 months in children with early chronic OME and hearing loss.

Schwartz, R.G., Shafer, V., Scheffler, F. & Kessler, K. Phonological factors in lexical representation and processing by children with language impairment.

Schwartz, R.G., Seiger, L.*, & Scheffler, F.* Psychometric criteria for specific language impairment: An epidemiology study.

Schwartz, R.G., Shafer, V., & Kessler, K. The developmental neurophysiology of lexical access.

Schwartz, R.G., Shafer, V., & Kessler, K. The electrophysiological time course of lexical access in children with specific language impairment.

Schwartz, R.G. Attention and memory in specific language impairment.

Schwartz, R., Scheffler, F., & Farkas, R. Identification criteria for the study of Specific Language Impairment: A sensitivity/specificity study.

Almodovar, D., Seiger, L., Kuntz, B., Velez, M., & Schwartz, R. Precursors to atypical language development.

Hestvik, A., Schwartz, R.G., Shafer, V.L., Neumann, Y., & Rinker, T. ERPs to contextually ungrammatical verb inflection. To be submitted to Brain Research. CUNY Graduate Center: New York.

Schwartz, R.G., Hestvik, A., Swinney, D., Seiger, L., Almodovar, D., & Asay, S. The processing of pronouns and reflexives in children with SLI. To be submitted.

Books

Schwartz, R.G. (2009). The Handbook of Child Language Disorders. New York, NY: Psychology Press.

Schwartz, R.G. (2017). Handbook of Child Language Disorders 2nd edition. New York, NY: Routledge.

Auza, A.B., & Schwartz, R.G. (2017). Language Development and Disorders in Spanish-speaking Children. Dordrecht: Springer.

Contributed Chapters (* indicates students or trainees of Dr. Schwartz’s)

Leonard, L.B., Steckol, K.F., & Schwartz, R.G. (1978). Semantic relations and utterance length in child language. In F. Peng & W. von Raffler Engel (Eds.) Language acquisition and developmental kinesics (pp. 93-106). Tokyo: Bunka Hyron.

Steckol, K., & Schwartz, R.G. (1981). Do actions speak louder than words? In R. St. Clair & W. von Raffler Engel (Eds.) Developmental kinesics: The emergence of a paradigm (pp. 29-37). Baltimore: University Park Press.

Schwartz, R.G. (1982). Development of pragmatics: Early word level. In J. Irwin (Ed.) The role of pragmatics in language development (pp. 29-48). Covina, CA: Fox Point.

Schwartz, R.G. (1982). Lexical styles in early language acquisition. In R. St. Clair & W. von Raffler Engel (Eds.) Language and cognitive styles (pp. 197-222). Lisse, Holland: Swets and Zeitlinger.

Schwartz, R.G., & Prelock, P.* (1982). Cognitive aspects of phonological disorders. In J. Panagos (Ed.) Phonological disorders: Language and cognitive deficits, Seminars in speech, language, and hearing (pp. 149-162). New York: Thieme.

Schwartz, R.G. (1983). Articulation disorders. In I. Meitus and B. Weinberg (Eds.) Diagnosis in speech-language pathology (pp. 113-150). Baltimore: University Park Press.

Schwartz, R.G., Messick, C.M., & Pollock, K.E. (1983). Some non-phonological aspects of phonological assessment. In J. Locke (Ed.) Assessment of phonological disorders, Seminars in speech, language, and hearing (pp. 335-350). New York: Thieme Stratton.

Leonard, L.B., & Schwartz, R.G. (1984). Early linguistic development in children with specific language impairment. In K.E. Nelson (Ed.) Children's language, Volume IV. Hillsdale, NJ: Lawrence Erlbaum Associates.

Schwartz, R.G. (1984). Analysis of one and two word utterances. In R. Naremore (Ed.) Language science: Recent advances (pp. 1-36). San Diego: College Hill Press.

Schwartz, R.G. (1984). The phonologic system: Normal acquisition. In J. Costello (Ed.) Speech disorders in children: Recent advances (pp. 25-74). San Diego: College Hill Press.

Schwartz, R.G. (1988). Phonological factors in early lexical acquisition. In M. Smith & J. Locke (Eds.) The Emergent Lexicon (pp. 185-222). New York: Academic Press.

Schwartz, R.G. (1990). Interactions among language components in the speech production of phonologically disordered children. In M. Yavas (Ed.) Phonological Development and Phonological Disorders. Sao Paulo: Mercado Aberto.

Schwartz, R.G. (1991). Lexical acquisition and processing in specific language impairment. In J. Miller (Ed.) New Directions in Research on Child Language Disorders. San Diego: College Hill Press.

Schwartz, R.G. (1992). Non-linear phonology as a framework for acquisition. In R. Chapman (Ed.) Childtalk: Processes in Language Acquisition and Disorders. Mosby-Yearbook.

Schwartz, R.G. (1993). Phonological Disorders. In G. Shames, E. Wiig, & W. Secord (Eds.) Human Communicative Disorders. Columbus, OH: Merrill.

Bjorklund, D., & Schwartz, R.G. (1995). The adaptive nature of developmental immaturity. In M. Smith & J. Damico (Eds.) Childhood Language Disorders. New York: Thieme.

Schwartz, R.G., Mody, M.*, & Petinou, K.* (1998). Phonological acquisition in children with OME: Speech perception and speech production. In J. Roberts & I. Wallace (Eds.) Otitis Media, Language, and Learning in Young Children. Baltimore, MD: Paul H. Brookes.

Schwartz, R.G. (1998). Phonological Disorders. In G. Shames, E. Wiig, & W. Secord (Eds.) Human Communicative Disorders. Columbus, OH: Merrill.

Schwartz, R.G. (2001). Phonological Disorders. In G. Shames and N. Anderson (Eds.) Human communicative Disorders. Boston, MA: Allyn and Bacon.

Ruben, R.J., & Schwartz, R.G. (2001). Necessity versus sufficiency: The role of input in language acquisition. In T. Sih, A. Chinski, & R. Eavey (Eds.) Manual of Pediatric Otolaryngology. Sao Paulo, Brazil: Interamerican Association of Otorhinolaryngology.

Shafer, V.L., Schwartz, R.G., &, Kessler, K.L. (2003). ERP Indices of Phonological and Lexical Processing in Children and Adults. Proceedings of the 27th Annual Boston University Conference on Language Development, pp. 751-761. Somerville, MA: Cascadilla Press.

Schwartz, R.G. (2005). Phonological Disorders. In G. Shames and N. Anderson (Eds.) Human Communicative Disorders 7th Edition. Boston, MA: Allyn and Bacon.

Schwartz, R.G., Jacobson, P.F., & Sheffler, F.L. (2006). Evidence-based practice in communication disorders: Language, speech, and voice disorders in children. In F.D. Burg, R.A. Polin, and A.A. Gershon (Eds.) Current Pediatric Therapy. Philadelphia: Saunders.

Schwartz, R.G. (2006). Would Today’s IRB Approve the Tudor Study? Ethical Considerations in Conducting Research Involving Children with Communication Disorders. In R. Goldfarb (Ed.) Ethics: A Case Study from Fluency. San Diego: Plural Publishing.

Tropper, B. & Schwartz, R.G. (2009). The neurobiology of child language disorders. In R.G. Schwartz (Ed.) The Handbook of Child Language Disorders. New York, NY: Psychology Press.

Schwartz, R.G. (2009). Specific language impairment. In R.G. Schwartz (Ed.) The Handbook of Child Language Disorders. New York, NY: Psychology Press.

Tropper, B., Hestvik, A., Shafer, V.L., & Schwartz, R.G. (2009). Questions in children with specific language impairment: An ERP study. In H. Chan, H. Jacob, & E. Kapia (Eds.) BUCLD 32: Proceedings of the 32nd Annual Boston University Conference on Language Development (pp. 504-515). Somerville, MA: Cascadilla Press.

Schwartz, R.G., & Shafer, V.L. (2010). The neurobiology of specific language impairment. In M. Faust (Ed.) Neuropsychology of Language: Advances in the neural substrates of language: Toward a synthesis of basic science and Clinical Research. New York: Wiley Blackwell.

Schwartz, R.G., & Marton, K. (2010). Phonological Disorders. In N. Anderson and G. Shames (Eds.) Human Communicative Disorders 8th Edition. Boston, MA: Allyn and Bacon.

Martohardjono, G., Phillips, I., Madsen II, C.N., Otheguy, R., Schwartz, R.G., & Shafer, V.L. (2016). Measuring Cross-Linguistic Influence in First- and Second-Generation Bilinguals: ERP vs. Acceptability Judgments. U. Penn Working Papers in Linguistics, Volume 23.1. Philadelphia, PA: University of Pennsylvania.

Martohardjono, G., Phillips, I., Madsen II, C.N., & Schwartz, R.G. (2017) Cross-linguistic influence in bilingual processing: An ERP study. BUCLD 41: Proceedings of the 41st Annual Boston University Conference on Language Development. Somerville, MA: Cascadilla Press.

Epstein, B. & Schwartz, R.G. (2017). The neurobiology of child language disorders. In R.G. Schwartz (Ed.) The Handbook of Child Language Disorders 2nd edition. New York, NY: Routledge.

Schwartz, R.G. (2017). Specific language impairment. In R.G. Schwartz (Ed.) The Handbook of Child Language Disorders 2nd edition. New York, NY: Routledge.

Schwartz, R.G., Botwinik-Rotem, I., & Friedmann, N. (2017). Linguistics. In R.G. Schwartz (Ed.) The Handbook of Child Language Disorders 2nd edition. New York, NY: Routledge.

Schwartz, R.G., & Auza, A. (2017). Introduction to language development and disorders in Spanish-speaking children. A.B. Auza, & R.G. Schwartz (Eds.). Language Development and Disorders in Spanish-speaking Children. Dordrecht: Springer.

American Speech-Language-Hearing Association (ASHA)

ASHA is the national professional, scientific, and credentialing association for speech-language pathologists, audiologists, and speech and hearing scientists in the United States, and an advocacy group for people with speech, language, and hearing disorders. Its site provides helpful resources on child language disorders, hearing loss, and professional issues.

Child Language Data Exchange System (CHILDES)

CHILDES is the child language component of the TalkBank system. TalkBank is a system for sharing and studying conversational interactions.

Tobii

Tobii Technology specializes in eye tracking and eye control. This technology makes it possible for computers to know exactly where users are looking.

Software

E-prime 2

E-Prime® is a suite of applications for computerized experiment design, data collection, and analysis. Used by more than 15,000 professionals in the research community, E-Prime® provides an easy-to-use environment and millisecond-precision timing to ensure the accuracy of your data. Its flexibility in building experiments from simple to complex makes it suitable for both novice and advanced users.

Neuroscan

Neuroscan is a leading developer of software and hardware for neuroscience applications.

Cognition and Language Laboratory

Research in the Cognition and Language Laboratory (CoLa Lab) focuses on the interaction among different cognitive functions and language processes in children and adults. The main goal is to examine how various cognitive functions, such as working memory, interference control, and attentional capacity, impact language comprehension and production in different populations. We examine specific cognitive control processes and consider how these develop in monolingual and bilingual individuals, how they may interact with language processing, and how they are affected in specific clinical populations, such as children with developmental language disorder (DLD) and autism. This research draws primarily on behavioral testing (including online tasks), neuropsychological measures, statistical modeling, and physiological approaches such as pupillometry.

Location: 7307

Mission Statement:

Research in the Cognition and Language Laboratory (CoLa Lab) focuses on the interaction among different cognitive functions and language processes in children and adults. The main goal is to examine how various cognitive functions, such as working memory, interference control, and attentional capacity, impact language comprehension and production in different populations. We examine specific cognitive control processes and consider how these develop in monolingual and bilingual individuals, how they may interact with language processing, and how they are affected in specific clinical populations, such as children with developmental language disorder (DLD) and autism. This research draws primarily on behavioral testing (including online tasks), neuropsychological measures, statistical modeling, and physiological approaches such as pupillometry.

Current Projects:

  • The effect of the bilingual experience on cognitive control performance
    • Language and task switching in bilingual preschool age children
  • The joint effects of bilingualism and language impairment on cognitive control
  • The effect of working memory updating training on various language and cognitive abilities
  • Interference control in children with autism spectrum disorders and developmental language disorder
  • Cognitive control, motivation, and effort exertion in children with and without developmental language disorder: A pupillometric approach
  • Lexical ambiguity resolution and interference control in bilingual and monolingual children with and without developmental language disorder
  • How does the relationship between cognitive control and language change over the course of the lifespan?

 
Principal Investigator:
Klara Marton, Ph.D.
Speech-Language-Hearing Sciences,
Graduate School and University Center,
City University of New York,
365 Fifth Ave
New York, New York 10016-4309
 
Email: kmarton@gc.cuny.edu

To learn more or to participate in our studies, visit the lab website.

Director  
Klara Marton

Klara Marton, Ph.D., is a neuropsychologist who holds a doctorate in Developmental Psychology and a Ph.D. in Speech and Hearing Sciences. She is interested in the development of, and changes in, language and cognition across the lifespan in different clinical populations and in individuals who speak different languages. Her research focuses on the interactions among various cognitive functions, such as working memory, inhibition, and attention control, and on additional processes that underlie language comprehension and production in monolingual and bilingual speakers.

Learn more about Dr. Klara Marton
Graduate Students  
Jessica Scheuer, MS, CCC-SLP, is a doctoral student. Her research interests include executive function development, ADHD, and emerging narrative skills.
Lia Pazuelo graduated from Kean University with an M.A. in Speech-Language Pathology. She also holds an M.A. in Psychology from John Jay College. She is interested in studying the connection between non-linguistic cognitive abilities and language production and comprehension, and ultimately in finding a way to transfer this knowledge to the clinical work of speech-language pathology. She is currently studying the interaction between the processing of ambiguous language and working memory. She loves spending her free time watching classic movies and traveling.
Yasmine Ouchikh is a Ph.D. student who graduated from the City College of New York in 2014 with a B.A. in Psychology. She is interested in the effects of bilingualism on speech perception and executive functioning in young and older adults. She is also interested in the neurolinguistics of bilingualism, neuroplasticity, cognition, and information overload.
Zhamilya Gazman is a Ph.D. student interested in emotion regulation, bilingualism, and working memory. She graduated with an M.S. in Neuroscience and Education from Teachers College, Columbia University, in 2016.
Thorfun Gehebe is a speech-language pathologist with clinical experience in child language impairments, speech sound disorders, augmentative and alternative communication, fluency disorders, voice disorders, and dysphagia. She graduated from the College of Saint Rose with her B.S. and M.S. in Communication Sciences and Disorders. Her research interests are cross-linguistic studies of developmental language disorder/specific language impairment, cognitive control, and bilingualism.
Bernadette Ojukwu joined the Ph.D. program in Speech-Language-Hearing Sciences in the fall of 2015. Before starting the program, she interned in a psycholinguistics lab at the University of Massachusetts Amherst, the same institution at which she received her bachelor’s degree in Biology. She has a variety of research interests ranging from syntax to pragmatics, and she is also interested in understanding how cognition plays a role in language processing.
Katherine Paulino is a bilingual speech-language pathologist. She received her undergraduate and master’s degrees in Speech-Language and Hearing at Lehman College in the Bronx, NY. Katherine has performed diagnostic evaluations and intervention for varied populations (e.g., early childhood, school age, and adult) with a variety of communication disorders. She has also worked with individuals with autism spectrum disorder, articulation and phonological disorders, aphasia, language learning disabilities, and hearing loss. Katherine currently works with Spanish-English bilingual children with developmental language delays in a classroom setting. Her research interests include developmental language impairments, working memory, executive functions, and resistance to interference.
Marcy Gordon is a speech-language pathologist. She is interested in working memory, SLI, and dyslexia.
Rula Faour is interested in diglossia, code-switching, and cognition.
Stephani Feinstein is a Ph.D. student whose research interests include cognitive control, bilingualism, and language development in children. She graduated with a B.S. in Business Administration from the Haas School of Business at UC Berkeley.
Visiting Scholars  
Nathalie Loiseau is a visiting Ph.D. student in Interpreting Studies from the University of Geneva, supported by a grant from the Swiss National Science Foundation, and a freelance conference interpreter. Her current research interests include aspects of multitasking in interpreters and orchestra conductors, as well as language testing for interpreting purposes. She was a pedagogical assistant in interpreting at FTI, Geneva, from 2010 to 2013, and recently graduated from the MAS in interpreter training.
Alumni
  • Andrea Benavides, MS, 2020
  • Deepti Wadhera, PhD, 2020
  • Junga Kim, PhD, 2020
  • Luca Campanelli, PhD, 2020
  • Jungmee Yoon, PhD, 2017
  • Naomi Eichorn, PhD, 2014
Past Volunteers
  • 2019 Nicole Castro (research assistant)
  • 2018 Karen Wilkins (research assistant)
  • 2013 Karen Cardenas (research assistant)
  • 2013 Marissa Chapler (research assistant)
  • 2013 Michelle D’Alleva (research assistant)
  • 2011 Jacquelyn Baker (research assistant)
  • 2010 Joseline Cruz (research assistant)
  • 2009 Julie Leokumovich (research assistant)
  • 2009 Ingrid Puglik (research assistant)
  • 2009 Oksana Savuk (research assistant)

Journal articles


Marton, K., & Scheuer, J. (2020). The relationship between proceduralization and cognitive control. Journal of Communication Disorders, 83, 105941.

Eichorn, N., Pirutinsky, S., & Marton, K. (2019). Effects of different attention tasks on concurrent speech in adults who stutter and fluent controls. Journal of Fluency Disorders, 61, 105714.

Lerman, A., Pazuelo, L., Kizner, L., Borodkin, K., & Goral, M. (2019). Language mixing patterns in a bilingual individual with non-fluent aphasia. Aphasiology, 33(9), 1137-1153.

Marton, K., Gehebe, T. & Pazuelo, L. (2019). Cognitive control along the language spectrum: From the typical bilingual child to language impairment. Seminars in Speech & Language, 40 (4), 256-271.

Eichorn, N., Marton, K., & Pirutinsky, S. (2018). Cognitive flexibility in preschool children with and without stuttering disorders. Journal of Fluency Disorders, 57, 37-50.

Fortunato-Tavares, T., Schwartz, R. G., Marton, K., de Andrade, C. F., & Houston, D. (2018). Prosodic boundary effects on syntactic disambiguation in children with cochlear implants. Journal of Speech, Language, and Hearing Research, 61(5), 1188-1202.

Marton, K., Kovi, Zs., & Egri, T. (2018). Is interference control in children with specific language impairment similar to that of children with autistic spectrum disorder? Research in Developmental Disabilities, 72, 179-190.

Zakariás, L., Keresztes, A., Marton, K., & Wartenburger, I. (2018). Positive effects of a computerised working memory and executive function training on sentence comprehension in aphasia. Neuropsychological Rehabilitation, 28(3), 369-386.

Marton, K., Goral, M., Campanelli, L., Yoon, J., & Obler, L. K. (2017). Executive control mechanisms in bilingualism: Beyond speed of processing. Bilingualism: Language & Cognition, 20(3), 613-631.
 
Eichorn, N., Marton, K., Schwartz, R. G., & Melara, R. (2016). Does working memory enhance or interfere with speech fluency in stuttering and fluent speakers? Evidence from a dual task paradigm. Journal of Speech, Language, & Hearing Research, 59, 415-429.
 
Marton, K. (2016). Executive control in bilingual children: Factors that influence the outcomes. Linguistic Approaches to Bilingualism, 6(5), 575-589.
 
Marton, K., Eichorn, N., Campanelli, L., & Zakarias, L. (2016). Working memory and interference control in children with specific language impairment. Language and Linguistics Compass, 10(5), 211-224.
 
Wagner, M., Roychoudhury, A., Campanelli, L., Shafer, V. L., Martin, B., & Steinschneider, M. (2016).  Representation of spectro-temporal features of spoken words within the P1-N1-P2 and T-complex of the auditory evoked potentials (AEP). Neuroscience Letters, 614, 119–126.
 
Szollosi, I. & Marton, K. (2016). Interference control in aphasia. Psychologia Hungarica Caroliensis, 169-187.
 
Goral, M., Campanelli, L., & Spiro III, A. (2015). Language dominance and inhibition abilities in bilingual older adults. Bilingualism: Language and Cognition, 18(1), 79–89.
 
Marton, K. (2015). Theoretically driven experiments may clarify questions about the bilingual advantage. Bilingualism: Language and Cognition, 18(1), 37-38.
 
Yoon, J., Campanelli, L., Goral, M., Marton, K., Eichorn, N., & Obler, L. K. (2015). The effect of plausibility on sentence comprehension among older adults and its relation to cognitive functions. Experimental Aging Research, 42, 272-302.
 
Eichorn, N., Marton, K. Campanelli, L., & Scheuer, J. (2014). Verbal strategies and nonverbal cues in school-age children with and without specific language impairment. International Journal of Language and Communication Disorders, 49(5), 618-630.          
 
MacRoy-Higgins, M., Schwartz, R. G., Shafer, V. L., & Marton, K. (2014). The influence of phonotactic probability on word recognition in toddlers. Child Language Teaching and Therapy, 30(1), 117-130.
 
Marton, K. (2014). Participation of children and adults with disability in participatory and emancipatory research. Educational Science/Neveléstudomány, 2014(2), 23-32.
 
Marton, K., Campanelli, L., Eichorn, N., Scheuer, J., & Yoon, J. (2014). Information processing and proactive interference in children with and without specific language impairment (SLI). Journal of Speech, Language, and Hearing Research, 57, 106-119.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4091676/
 
Marton, K. & Eichorn, N. (2014). Interaction between working memory and long-term memory: A study in children with and without language impairment. Zeitschrift für Psychologie, Special Issue: Applied Memory Research, 222(2), 90-99.
 
Marton, K., Kövi, Z., Farkas, L., & Egri, T. (2014). Everyday functions and needs of individuals with disability: A reliability and validity study based on the principles of the ICF. Psychiatria Hungarica: A Magyar Pszichiatriai Tarsasag tudomanyos folyoirata, 29(4), 398-409.
 
MacRoy-Higgins, M., Schwartz, R. G., Shafer, V. L., Marton, K. (2013). Influence of phonotactic probability/neighbourhood density on lexical learning in late talkers. International Journal of Language & Communication Disorders, 48(2), 188-199.
 
Marton, K., Campanelli, L., Scheuer, J., Yoon, J., & Eichorn, N. (2012). Executive function profiles in children with and without specific language impairment. Journal of Applied Psycholinguistics, XII(3), 57-73. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4188414/
 
Marton, K., Campanelli, L., & Farkas, L. (2011). Grammatical sensitivity and working memory in children with language impairment. Acta Linguistica Hungarica, 58(4), 448-466.
 
Leydon, C., Wroblewski, M., Eichorn, N., & Sivasankar, M. (2010). A meta-analysis of outcomes of hydration intervention on phonation threshold pressure. Journal of Voice, 24(6), 637-643.
 
Marton, K. (2009). Imitation of body postures and hand movements in children with specific language impairment. Journal of Experimental Child Psychology, 102(1), 1-13.
 
Marton, K. (2008). Visuo-spatial processing and executive functions in children with specific language impairment. International Journal of Language and Communication Disorders, 43(2), 181-200.

Campanelli L., Iberni E., Sarracino D., Degni S., & Mariani R. (2007). Semiotics of the nonverbal vocal expression of emotions and research into the psychotherapy process: A pilot study. Rivista di Psicologia Clinica, 1/2007, 102-115.
 
Marton, K., Kelmenson, L., & Pinkhasova, M. (2007). Inhibition control and working memory capacity in children with SLI. Psychologia, 50, 110-121.
 
Marton, K., Schwartz, R. G., Farkas, L., & Katsnelson, V. (2006). The effect of sentence length and complexity on working memory performance in Hungarian children with specific language impairment (SLI): A cross-linguistic comparison. International Journal of Language and Communication Disorders, 41(6), 653-673.
 
Marton, K. (2006). Do nonword repetition errors in children with specific language impairment (SLI) reflect a weakness in an unidentified skill specific to nonword repetition or a deficit in simultaneous processing? Applied Psycholinguistics, 27(4), 569-573.

Marton, K., Abramoff, B., & Rosenzweig, S. (2005). Social cognition and language in children with Specific Language Impairment (SLI). Journal of Communication Disorders, 38(2), 143-162.
 
Marton, K. & Schwartz, R. G. (2003). Working memory capacity limitations and language processes in children with specific language impairment. Journal of Speech, Language, and Hearing Research, 46, 1138-1153.
 
Schaeffer, N. & Eichorn, N. (2001). The effects of differential vowel prolongations on perceptions of speech naturalness. Journal of Fluency Disorders, 26(4), 335-348.
 
Goffman, L., Schwartz, R. G., & Marton, K. (1996). Information level and young children’s phonological accuracy. Journal of Child Language, 23, 337-347.

Books and Book Chapters


Kim, J., Marton, K., & Obler, L. K. (2019). Interference control in bilingual auditory sentence processing in noise. In I. A. Sekerina, L. Spradlin, & V. Valian (Eds.), Bilingualism, executive function, and beyond: Questions and insights (pp. 281-293). John Benjamins Publishing Company.

Marton, K. (2019). Executive control in bilingual children: Factors that influence the outcomes. In I. A. Sekerina, L. Spradlin, & V. Valian (Eds.), Bilingualism, executive function, and beyond: Questions and insights (pp. 281-293). John Benjamins Publishing Company.

Marton, K. & Gazman, Z. (2019). Interactions among speed of processing, cognitive control, age, and bilingualism. In I. A. Sekerina, L. Spradlin, & V. Valian (Eds.), Bilingualism, executive function, and beyond: Questions and insights (pp. 281-293). John Benjamins Publishing Company.

Gurland, G. & Marton, K. (2018). Assessment of school-age language/literacy disorders. In C. Stein-Rubin & R. L. Fabus (Eds.), A Guide to Clinical Assessment and Professional Report Writing in Speech-Language Pathology (2nd ed., pp. 223-244). Delmar Cengage Learning.

Marton, K. & Yoon, J. (2014). Cross-linguistic investigations of language impairments. In P. Brooks & V. Kempe (Eds.), Encyclopedia of Language Development (pp. 123-127). Sage Reference.

Eichorn, N. & Fabus, R. (2012). Assessment of fluency disorders in children and adults. In C. Stein & R. Fabus (Eds.), A Resource Manual for the Assessment of Children and Adults with Communication Disorders (pp. 347-398). Clifton Park, NY: Delmar Publications/Cengage Learning.

Gurland, G. & Marton, K. (2012). Assessment of school-age language/literacy disorders. In C. Stein-Rubin & R. Fabus (Eds.), A Guide to Diagnostic Assessment and Professional Report Writing in Speech-Language Pathology (pp. 254-304). Delmar/Cengage Learning.

Schwartz, R. G. & Marton, K. (2011). Articulatory and phonological disorders. In N. B. Anderson & G. H. Shames (Eds.), Human Communication Disorders: An Introduction (pp. 141-185). Allyn & Bacon.

Marton, K. (2011). Interaction between flexible cognition and language comprehension in children with and without language impairment. In K. L. Meinken (Ed.), Encyclopedia of Language and Linguistics. Hauppauge, NY: Nova Science Publishers.

Marton, K. (2009). Interaction between flexible cognition and language comprehension in children with and without language impairment. In M. A. Reed (Ed.), Children and Language: Development, Impairment and Training (pp. 147-171). Hauppauge, NY: Nova Science Publishers.

Marton, K. & Wellerstein, M. (2008). What can social psychology gain from and offer to children with specific language impairment: Social perception of the self and others. In J. B. Teiford (Ed.), Social Perception: 21st Century Issues and Challenges (pp. 103-124). New York: Nova Publishing.

Conference Proceedings


Campanelli, L., Van Dyke, J., & Marton, K. (2018). The modulatory effect of expectation on memory retrieval during sentence comprehension. In T. T. Rogers, M. Rau, X. Zhu, & C. W. Kalish (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (pp. 1434-1439). Madison, WI. ISBN: 978-0-9911967-8-4
https://academicworks.cuny.edu/cgi/viewcontent.cgi?article=1513&context=gc_pubs

Wadhera, D., Campanelli, L., & Marton, K. (2018). The influence of bilingual language experience on working memory updating performance in young adults. In T. T. Rogers, M. Rau, X. Zhu, & C. W. Kalish (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (pp. 2639-2644). Madison, WI. ISBN: 978-0-9911967-8-4

Eichorn, N. & Marton, K. (2015). When less can be more: Dual task effects on speech fluency. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. P. Maglio (Eds.), Proceedings of the 37th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society.
 
Zakariás, L., Keresztes, A., & Marton, K. (2014). Positive effects of computerized executive function training in aphasia: A pilot study. Stem-, Spraak- en Taalpathologie, Supplement, 178-181. 15th International Science of Aphasia Conference.

Marton, K. & Kelmenson, L. (2007). The impact of inhibition control on working memory in children with SLI. Proceedings of EuroCogSci07 (pp. 481-485). Mahwah, NJ: Lawrence Erlbaum Associates.

Marton, K., Schwartz, R. G., & Braun, A. (2005). The effect of age and language structure on working memory performance. In B. G. Bara, L. Barsalou, & M. Bucciarelli (Eds.), Proceedings of the XXVII Annual Meeting of the Cognitive Science Society (pp. 1413-1418). Mahwah, NJ: Lawrence Erlbaum Associates. http://csjarchive.cogsci.rpi.edu/Proceedings/2005/index.htm

Selected Conference Presentations and Posters


Marton, K., Gehebe, T., Scheuer, J., Pazuelo, L., & Paulino, K. (2019). Cognitive control in bilingual children with language impairment. Presented at the American Speech-Language-Hearing Association (ASHA) Convention, Orlando, FL.

Gazman, Z., Pazuelo, L., Scheuer, J., Campanelli, L., Ouchikh, Y., & Marton, K. (2019). Best practices in assessing language proficiency in bilingual children with and without DLD. Presented at the International Symposium on Bilingualism 12. Edmonton, Alberta, Canada.
Gehebe, T., Wadhera, D., Kim, J., & Marton, K. (2019). Bilingual young adults’ proficiency: Does modality matter? Presented at the International Symposium of Bilingualism 12. Edmonton, Alberta, Canada.

Kim, J., Pisano, T.S., Marton, K., Martin, B.A., & Obler, L.K. (2019). Different interference control mechanisms underlying L2 auditory sentence comprehension in listeners with high and mid L2 proficiency. Presented at the International Symposium on Bilingualism 12, Edmonton, Alberta, Canada.

Yoon, J., Marton, K. & Obler, L.K. (2019). Language proficiency, lexical knowledge, and the bilingual language context. Presented at the International Symposium on Bilingualism 12, Edmonton, Alberta, Canada.

Campanelli, L., Van Dyke, J. A., & Marton, K. (2018). The interaction between memory retrieval and expectations during sentence processing. Poster session presented at the 24th Architectures and Mechanisms of Language Processing (AMLaP) conference, Berlin, Germany.

Marton, K. & Campanelli, L. (2018). Associations between language proficiency and cognitive control in children: The application of a drift-diffusion model. Poster session presented at the 59th Annual Meeting of the Psychonomic Society, New Orleans, LA.

Marton, K., Pazuelo, L., Ouchikh, Y., Gehebe, T., & Scheuer, J. (2018). Cognitive research in children with SLI: From laboratory to clinical practice. Presented at the Annual Convention of the American Speech, Language, & Hearing Association, Boston, MA.

Marton, K., Scheuer, J., Ouchikh, Y., Yerimbetova, Z. & Pazuelo, L. (2018). Joint effects of bilingualism and specific language impairment: Interaction between speed of processing and cognitive control. Presented at the 2nd BiSLI Conference, University of Reading, England.

Ouchikh, Y., Pazuelo, L., Yerimbetova, Z., Scheuer, J., & Marton, K. (2018). Working memory updating and interference control in mono- and bilingual children with SLI. Presented at the 2nd BiSLI Conference, University of Reading, England.

Ouchikh, Y., Yerimbetova, Z., Aramridth, T., Scheuer, J., & Marton, K. (2018). Working memory updating and interference control in children with SLI. Poster session presented at the 39th Symposium on Research in Child Language Disorders (SRCLD), Madison, Wisconsin.

Pazuelo, L., Campanelli, L., Ouchikh, Y., Aramridth, T., Scheuer, J., & Marton, K. (2018). Interference during language comprehension of ambiguous sentences in bilingual and monolingual children with SLI. Presented at the 2nd BiSLI Conference, University of Reading, England.

Pazuelo, L., Marton, K., Campanelli, L., & Scheuer, J. (2018). Interference and the use of visual cues during language comprehension in children with SLI and TLD. Poster session presented at the 39th Symposium on Research in Child Language Disorders (SRCLD), Madison, Wisconsin.

Scheuer, J., Campanelli, L., & Marton, K. (2018). Interference control in children with autism spectrum disorder and specific language impairment. Poster session presented at the 39th Symposium on Research in Child Language Disorders (SRCLD), Madison, Wisconsin.

Wadhera, D. & Marton, K. (2018). The influence of bilingual language experience on resistance to interference in young adults: Looking beyond spoken language proficiency. Poster session presented at the 59th Annual meeting of the Psychonomic Society, New Orleans, LA, p. 239.

Fortunato-Tavares, T., Marton, K., Houston, D., Andrade, C., & Schwartz, R. G. (2017). The use of stress in sentence comprehension by children with cochlear implants. Presented at the Annual Convention of the American Speech, Language, & Hearing Association, Los Angeles, CA.
 
Fortunato-Tavares, T., Schwartz, R.G., de Andrade, C., Marton, K., & Houston, D. (2017). The effects of prosody on syntactic disambiguation in children with cochlear implants. Presented at the International Association of Child Language, Lyon, France.

Marton, K., Campanelli, L., & Yerimbetova, Z. (2017). Beyond categories: Estimating cognitive control skills using a range of L2 proficiency levels. Presented at the 11th International Symposium on Bilingualism, Limerick, Ireland.

Pazuelo, L., Yerimbetova, Z., Ouchikh, Y., Scheuer, J., & Marton, K. (2017). Joint effects of bilingualism and language impairment on cognitive control. Presented at the 11th International Symposium on Bilingualism, Limerick, Ireland.

Wadhera, D. & Marton, K. (2017). The effect of bilingual language proficiency on individual differences in resistance to interference. Presented at the 11th International Symposium on Bilingualism, Limerick, Ireland.

Wadhera, D., Yoon, J., & Marton, K. (2017). Variations in individual bilingual experiences affect interference control in working memory updating. Presented at the 11th International Symposium on Bilingualism, Limerick, Ireland.

Eichorn, N., Marton, K., & Pirutinsky, S. (2016). Don’t pay attention? Interactions between speech fluency & attention in stuttering & fluent adults. Presented at the Annual Convention of the American Speech, Language, & Hearing Association, Philadelphia, PA.
 
Eichorn, N., Marton, K., & Pirutinsky, S. (2016). Cognitive strategies used by preschoolers with & without stuttering disorders on an attention-switching task. Presented at the Annual Convention of the American Speech, Language, & Hearing Association, Philadelphia, PA.
 
Fortunato-Tavares, T., Andrade, C., Houston, D., Marton, K., & Schwartz, R. (2016). Effects of prosody on sentence comprehension in children with cochlear implants. Presented at the Annual Convention of the American Speech, Language, & Hearing Association, Philadelphia, PA.
           
Marton, K. (2016). Interference control in children with specific language impairment. Presented at the European Child Language Disorder, EUCLDIS, Central European University, Budapest.
 
Campanelli, L., Van Dyke, J. & Marton, K. (2016). Investigating the modulatory effect of expectations on memory retrieval during sentence comprehension. Presented at the 29th Annual CUNY Conference on Human Sentence Processing - University of Florida, Gainesville, Florida.

Developmental Neurolinguistics Laboratory

The Developmental Neurolinguistics Laboratory (Professor Shafer) is fully equipped with state-of-the-art equipment and software for stimulus creation, delivery, experimental control, and electrophysiological data acquisition and processing. This includes a 128-channel amplifier (Geodesic, Inc.), with Geodesic experimental control software and E-Prime for stimulus delivery and electrophysiological data acquisition installed on two computers. The system is particularly well suited to investigations of infant and child populations because the Geodesic electrode nets do not require abrasion of the skin. Two IAC 9’ by 9’ sound-shielded booths are in the lab area, one of which is used for electrophysiological testing. The laboratory has 10 desktop computers, connected to the university network, for student use. The five-room suite (1,400 square feet) includes space for the ten students, visiting postdoctoral trainees, and collaborating faculty. One room of the suite is used for behavioral testing and observation of infants and children and is supplied with a variety of toys and activities. The laboratory holds weekly meetings, attended by approximately 10 to 15 doctoral students and postdoctoral collaborators.

Lab Meetings: Tuesday 11:00am - 1:00pm
Location: Room 7392

Mission Statement:

The goal of the Developmental Neurolinguistics Laboratory is to understand the relationship between language and brain development and later brain organization.

Research projects that are currently in progress use electrophysiological methods to examine brain processes. An understanding of the relationship between language and brain development and later brain organization will help explain the nature of developmental language disorders. 

Within this site, you can find descriptions of the electrophysiological techniques that we use in ongoing studies. We are currently examining the neurophysiology of language learning in monolingual and bilingual children with typical and atypical language development, particularly children with specific language impairment (SLI) and children with Autism Spectrum Disorders (ASDs). We are also studying the language development of infants, specifically their abilities to discriminate and learn speech sounds (for example, ‘ba’ and ‘pa’) and word patterns. We use EEG (brainwave) recordings to see what is happening in a baby’s or child’s brain while he or she listens to sounds. We use a special net of electrodes to pick up brainwave activity from the baby’s or child’s scalp. The net takes about 5 minutes to put on and does not hurt.

The participants in these studies include infants and children with typical language development, children with SLI, and children with ASD. We recruit both monolingual and bilingual populations in many of our studies.

Principal Investigator:

Valerie Shafer, Ph.D. 
Speech-Language-Hearing Sciences,
Graduate School and University Center,
City University of New York,
365 Fifth Ave
New York, New York 10016-4309

Phone: 212-817-8805
Fax: 212-817-1537
email: vshafer@gc.cuny.edu

Faculty  
Valerie Shafer
Valerie L. Shafer, Ph.D., lab director, is a full Professor in the Ph.D. Program in Speech and Hearing Sciences at The Graduate School and University Center of the City University of New York. Her research focuses on the neurophysiological basis of speech perception. She is also interested in the early identification of language disorders and in the neurophysiological basis of language processing in children with autism spectrum disorders.
Graduate Students  
Jason Rosas, M.S., CCC-SLP, received his bachelor’s degree in Psychology with honors from the University of Pennsylvania and a master’s degree in Speech-Language Pathology from Columbia University. He is a licensed speech-language pathologist with 10 years of experience. Jason works at Beth Israel Medical Center and has worked at Long Island University as an adjunct professor and clinical supervisor. He has extensive field experience with a variety of bilingual populations, from early intervention to geriatric adults, with specializations in bilingual language disorders, literacy development, and swallowing and feeding disorders. He is also certified in the Orton-Gillingham multisensory approach to reading instruction. Jason is a matriculated student in the Ph.D. Program in Speech-Language-Hearing Sciences at the Graduate School and University Center of the City University of New York. He is a fourth-year, second-level student currently working in the Developmental Neurolinguistics Laboratory, directed by Dr. Valerie Shafer, and formerly the Speech Acoustics Perception Laboratory, directed by Emeritus Professor Winifred Strange. His interests are speech perception, dyslexia, and bilingual language learning across the lifespan.
Sarah Kresh is a doctoral candidate in the Linguistics Program of the Graduate Center, CUNY. Her main research interest is early childhood grammatical acquisition in L1, multidialectal and multilingual contexts. Her dissertation project looks at production, comprehension and processing of subject-verb agreement in English-acquiring preschoolers. In the lab, she has also worked on projects on L2 phonology, the role of orthography in phonological processing, and language processing in children with Autism Spectrum Disorder.
Stanley Chen is a doctoral student in the lab. He received a dual B.A. from National Chengchi University, focusing on English linguistics and historical Chinese, and an M.S.Ed. in TESOL from the University of Pennsylvania. He recently completed his M.A. in Linguistics at the GC, working on filler-gap dependencies in Mandarin for his thesis project. His main interests include bilingualism, psycholinguistics, and neurolinguistics.
Recent Graduates  
  Vikas Grover, CCC-SLP is a certified Speech-Language Pathologist currently pursuing a Ph.D. in Speech-Language-Hearing Sciences. He is interested in language acquisition, language selection, and delayed acquisition of speech and language in bilingual children. He wants to learn about and contribute to research in bilingualism and child language acquisition.
Nancy S. Vidal, Ph.D. student, was born in Colombia, South America. A fluent bilingual, her research interests include speech perception and its relationship to literacy, and the neurophysiology of speech and language in monolingual and bilingual populations. Her professional goal is to contribute to this research and to apply what she learns to the clinical populations she treats.
Yan Yu is a 2013 graduate of the Developmental Neurolinguistics Laboratory. She is currently a faculty member at St. John’s University. Her primary research interest pertains to the neurophysiology of speech and language processing in bilingual language development.
Emily Zane worked in the DNL lab from 2009 to 2015. In the lab, she collaborated on several projects, including one that examined electrophysiological brain responses to language in minimally verbal children and adolescents with Autism Spectrum Disorder (ASD). She is now a postdoctoral researcher at the FACE Lab at Emerson College in Boston, where she continues to work with children and adolescents with ASD. In general, she is interested in how the processing of extralinguistic factors, like visual information and world knowledge, affects the processing and comprehension of language in typical populations and in populations with language disorders. You can read more about her interests and current projects at http://facelab.emerson.edu/.
Judith Iannotta is a doctoral student in the Speech & Hearing Sciences Program at the CUNY Graduate Center. She earned her Bachelor’s Degree in Speech-Language Pathology at Hofstra University and Master’s Degree in Speech-Language Pathology at St. John’s University. As a clinician she has worked in hospital and school settings with children and adults who have developmental disabilities, brain injury and neurogenic disorders. Judith is an Adjunct Instructor at St. John’s University where she teaches in the Department of Communication Sciences and Disorders. She is a member of the Developmental Neurolinguistic Lab directed by Dr. Valerie Shafer. Her research interests focus on the neurodevelopment of language for typical and clinical populations. In her free time she enjoys artistic expression in many forms including painting, pottery and writing.
Eve Higby worked in the Developmental Neurolinguistics Lab (with Dr. Shafer) and the Neurolinguistics Lab (with Dr. Obler) and graduated in 2016. Her dissertation was titled "Native language adaptation to novel verb argument structures by Spanish-English bilinguals: An electrophysiological investigation." Eve is currently a postdoctoral research fellow in the department of Psychology at the University of California, Riverside.
  Anthea Vivona completed her Ph.D. in 2013. Her dissertation addressed the frequent word forms present in maternal infant-directed speech. Dr. Vivona is an ASHA certified and NYS licensed speech-language pathologist. Currently she provides clinical supervision for both intervention and assessment at the St. John's University Speech & Hearing Center and she teaches undergraduate and graduate classes in the Department of Communication Sciences & Disorders at St. John's. In addition, she is the faculty moderator for the SJU Chapter of the National Student Speech Language Hearing Association. Dr. Vivona’s clinical expertise is in the area of child language, specifically autism spectrum disorders. Her research interests continue to include infant-directed speech.
Carol Tessel is an Assistant Professor at Florida Atlantic University.
Her research involves bilingual and second language acquisition.
Karen Garrido-Nag, M.S., CCC-SLP Karen is currently a third-level student working on her dissertation experiment on the effects of attention on the mismatch response of infants. Her research interests include autism, attention, neurophysiology, and language development. Karen received her Master of Science degree in Speech and Language Pathology from Gallaudet University and is currently working there as an Instructor.
Hia Datta I am a 2010 graduate of the Developmental Neurolinguistics Lab and Neurolinguistics Lab, having worked with co-advisors Dr. Valerie Shafer and Dr. Loraine Obler. Now, I am a postdoctoral associate with Dr. Jason Zevin at the Sackler Institute for Developmental Psychobiology at the Weill Cornell Medical Center. My research interest, in general, is language processing in the human brain over its lifespan. Specifically, I am interested in understanding how and which neural networks engage in learning multiple languages during infancy, childhood, and adult years. In graduate school I learned to use event-related potentials to understand neurophysiological mechanisms of language, and currently I am learning other methods, such as fMRI and mouse-tracking, to explore the same questions.
Monica Palmieri Wagner, MA, CCC My primary research interest pertains to speech processing in the cortex, specifically sound and word processing. As a speech and language pathologist, I am interested in the normal development of speech and language. Only through further understanding of the basic processes in normal development can we begin to understand the underlying cause(s) of specific language impairment. I believe there is an essential need for researchers interested in speech and language processing to have an in-depth knowledge of auditory processing in the cortex. At this time, only through the study of animal research in addition to human research can we gain an understanding of the complicated issues involved in processing in auditory cortical networks. Currently I am working on a research project that compares sound processing in two groups (monolingual English listeners and native Polish listeners) having different native language experience. The goal is to learn the effects of native language experience on the perception of legal and illegal phonotactic structures. Also, I question whether there are categories of sounds in auditory processing. Because sounds differ in word-onset and word-final position, the monolingual English listeners in the current experiment may not perceive phonetic distinctions in word onset that they perceive in word final position, demonstrating that the phoneme is not pertinent in processing. Cortical auditory processing is largely an unknown and fascinating research area. I am currently teaching Anatomy and Physiology of Speech at the undergraduate level. I hope in that endeavor, I am instilling my awe of the process of speech and language. For fun, I cook and enjoy getting people together to socialize. My husband has his dream Harley and taking road trips with him has become an unexpected hobby of mine.
  Miwako Hisagi I am Miwako Hisagi. My hometown is Tokyo, Japan. I graduated in 1999 from George Mason University (VA) with an M.A. in English (Linguistics) and a graduate certificate in Teaching English as a Second Language (TESL). I taught for four years in the Japanese immersion program for Fairfax County Public Schools in Virginia (full-time). I also taught Japanese at the College of William and Mary (VA) for two years and at Case Western Reserve University (OH) for a year as an Instructor of Japanese language (full-time). I joined the Ph.D. program at CUNY (Speech and Hearing) in the fall of 2002. I am currently teaching an undergraduate course in Acoustic Phonetics at Lehman College, CUNY this semester. I also taught a graduate Research Seminar course at Adelphi University last semester. My main areas of interest are cross-linguistic speech production and perception and ERP studies of speech perception. My dissertation title is: Perception of Japanese Temporally-cued Phonetic Contrasts by Japanese and American English Listeners: Behavioral and Electrophysiological Measures. I am also involved in Dr. Winifred Strange’s Speech Acoustics and Perception Lab (SAPL), and I have joined Dr. Loraine K. Obler’s Neurolinguistics Lab since I am interested in bilingualism and brain research as well.
  Michelle MacRoy-Higgins, M.S., CCC-SLP, TSHH. Hi, I am Michelle MacRoy-Higgins and I am a doctoral student in the Speech and Hearing Sciences department. I earned my Bachelor’s degree in Communication Sciences and Disorders at the State University of New York College at Geneseo and my Master’s degree in Communication Sciences and Disorders at Adelphi University in Garden City, NY. At Adelphi University, I assisted Dr. Lawrence Raphael and Dr. Florence Myers in research examining the acoustic and perceptual differences between speakers who clutter and normal speakers. I received my Certificate of Clinical Competence in Speech-Language Pathology after completing my Clinical Fellowship Year at Heart Share First Step Preschool, in Richmond Hill, NY. I worked clinically for several years with the Early Intervention/preschool population and currently work as an Instructor and Clinical Supervisor at Hunter College in the Communication Sciences department. I am currently working on my dissertation, which examines the storage of phonological forms in children who are late talkers. My interests include typical and atypical language and phonological acquisition and autism spectrum disorders.
  Yael Neumann Hi! My name is Yael Neumann. I am a doctoral student in the Speech and Hearing Sciences department at the Graduate Center-CUNY. My general research focus is in the area of neurogenics, with a primary concentration in lexical access for production. Currently, I am working on my dissertation project entitled: "The Brain Bases of Word Finding Problems in Healthy Younger and Older Adults". A common complaint among healthy older adults is the increased frequency of word-finding problems. Research points to breakdowns in phonological processing. The aim of this study is to investigate the effects of age on specific phonological substages of processing, namely, sound segments and syllables. An implicit naming task with event-related potentials is being used. Results will have direct implications for neurocognitive remediation aimed at strengthening links of processing that weaken with age. The findings will also serve as a foundation for investigations of clinical populations with difficulties in lexical retrieval for speech production, e.g., aphasia and apraxia. Additionally, I have been involved in two other projects: 1) an electrophysiology project with Dr. Valerie Shafer to identify how processing of regular vs. irregular verbs in sentences differs in adults, and in both typically and atypically developing children, and 2) a neurolinguistic study with Dr. Loraine K. Obler looking at how adults with either right or left brain-damage comprehend ‘vocal emblems’ or symbolic sounds, e.g., "Shh" for "Be quiet" and "Brr" for "It’s cold". In this project, our aim is to further neurolinguistic understanding of the representation of verbal and non-verbal sound patterns in the cerebral hemispheres. These projects are currently being written up for submission to journals. They have also been presented at various international and national conferences, e.g., The Science of Aphasia (Trieste, Italy), ASHA, and NYSSLHA.
Clinically, I work as a speech-language pathologist at a Rehab Center where I assess and treat clients of all different ages with varied disorders (never gets boring!). My major clinical interests lie in aphasia, motor speech disorders, voice and fluency. Additionally, I supervise graduate students, CFY and TSSH clinicians, and teach both graduate and undergraduate neurogenically-based courses at various universities.
Margaret T. Shakibai Margaret T. Shakibai earned a BA in Speech-Language Pathology & Audiology from Marymount Manhattan College (2001, Magna Cum Laude) and an MPhil in Speech and Hearing Sciences from the CUNY Graduate Center (2004). She is currently a doctoral candidate in the Speech and Hearing Sciences department, CUNY Graduate Center. Her research area is the efficacy of a training program to teach kindergarteners to detect lexical ambiguities. She was a Research Assistant in the Developmental Neurolinguistics Laboratory, CUNY Graduate Center (2001-2005), and an Adjunct Professor at Brooklyn College teaching Research Design (2004-2005); since Fall 2004 she has been an Adjunct Professor in the Department of Communication Sciences and Disorders, Marymount Manhattan College. She has a daughter, Nadia Yasmeen Shakibai, born May 5, 2006 [see photo].
  Baila Tropper received her Bachelor’s degree in Speech Communication Sciences from Touro College and her Master’s degree in Speech-Language Pathology from Brooklyn College. Baila was the recipient of the Brooklyn College Speech and Hearing Center Project Award of 2005. She currently works as a New York State licensed Speech-Language Pathologist and Teacher of the Speech and Hearing Handicapped. Additionally, she holds a Certificate of Clinical Competence from the American Speech-Language Hearing Association. Baila’s experiences as a speech-language pathologist include servicing pediatric and geriatric populations in clinical, hospital, homecare, and public and private school settings. She currently works in an outpatient clinic in Brooklyn, specializing in the treatment of childhood language impairments. Baila is a Ph.D. student in the Speech and Hearing Sciences Program at the CUNY Graduate Center. She is a recipient of the CUNY Graduate Center Science Fellowship. Baila is presently the lab manager of the Developmental Language Laboratory, directed by Dr. Richard Schwartz. Her primary research interest is language processing in children with specific language impairment. Baila is currently collaborating with researchers from the Developmental Neurolinguistics Laboratory, where she uses electrophysiological methods to examine the brain mechanisms of normal and disordered language.
   

 

Research Highlights

Selected Publications

Zane, E., & Shafer, V. (2018). Mixed metaphors: Electrophysiological brain responses to (un)expected concrete and abstract prepositional phrases. Brain Research, 1680, 77–92. https://doi.org/10.1016/j.brainres.2017.12.008

Wagner, M., Shafer, V. L., Haxhari, E., Kiprovski, K., Behrmann, K., & Griffiths, T. (2017). Stability of the cortical sensory waveforms, the P1-N1-P2 complex and T-complex, of auditory evoked potentials. Journal of Speech, Language, and Hearing Research, 60(7), 2105–2115. https://doi.org/10.1044/2017_JSLHR-H-16-0056

Yu, Y. H., Shafer, V. L., & Sussman, E. S. (2017). Neurophysiological and behavioral responses of Mandarin lexical tone processing. Frontiers in Neuroscience, 11, 95. https://www.ncbi.nlm.nih.gov/pubmed/28321179

Rinker, T., Shafer, V. L., Kiefer, M., Vidal, N., & Yu, Y. H. (2017). T-complex measures in bilingual Spanish-English and Turkish-German children and monolingual peers. PLoS ONE, 12(3), e0171992. https://doi.org/10.1371/journal.pone.0171992

Hisagi, M., Shafer, V. L., Miyagawa, S., Kotek, H., Sugawara, A., & Pantazis, D. (2016). Second-language learning effects on automaticity of speech processing of Japanese phonetic contrasts: An MEG study. Brain Research. doi: 10.1016/j.brainres.2016.10.004. [Epub ahead of print] PMID: 27720855

Cantiani, C., Choudhury, N., Yu, Y., Shafer, V., Schwartz, R., & Benasich, A. A. (2016). From sensory perception to lexical-semantic processing: An ERP study in non-verbal children with autism. PLoS ONE. http://dx.doi.org/10.1371/journal.pone.0161637 PMID: 27560378

Rota-Donahue, C., Schwartz, R. G., Shafer, V., & Sussman, E. S. (2016). Perception of small frequency differences in children with auditory processing disorder or specific language impairment. Journal of the American Academy of Audiology, 27(6), 489–497. https://doi.org/10.3766/jaaa.15122

Neumann, Y., Epstein, B., & Shafer, V. L. (2016). Electrophysiological indices of brain activity to content and function words in discourse. International Journal of Language & Communication Disorders, 51(5), 546–555. doi: 10.1111/1460-6984.12230. PMID: 26992119

MacRoy-Higgins, M., Shafer, V. L., Fahey, K. J., & Kaden, E. R. (2016). Vocabulary of toddlers who are late talkers. Journal of Early Intervention, 38(2), 118–129. https://doi.org/10.1177/1053815116637620

Hisagi, M., Shafer, V. L., Strange, W., & Sussman, E. (2015). Neural measures of a Japanese consonant length discrimination by Japanese and American English listeners: Effects of attention. Brain Research, 1626, 218–231. PMID: 26119918

Shafer, V. L., Yu, Y. H., & Wagner, M. (2015). Maturation of cortical auditory evoked potentials (CAEPs) to speech recorded from frontocentral and temporal sites: Three months to eight years of age. International Journal of Psychophysiology, 95(2), 77–93. PMID: 25219893

 
Näätänen, R., Sussman, E. S., Salisbury, D., & Shafer, V. L. (2014). Mismatch negativity (MMN) as an index of cognitive dysfunction. Brain Topography, 27(4), 451–466. doi: 10.1007/s10548-014-0374-6. PMID: 24838819
 
Neumann, Y., Epstein, B., Yu, Y. H., Benasich, A. A., & Shafer, V. (2014). An electrophysiological investigation of discourse coherence in healthy adults. Clinical Linguistics & Phonetics. [Epub ahead of print] PMID: 24779648
 
Epstein, B., Shafer, V. L., Melara, R. D., & Schwartz, R. G. (2014). Can children with SLI detect cognitive conflict? Behavioral and electrophysiological evidence. Journal of Speech, Language, and Hearing Research, 57(4), 1453–1467. doi: 10.1044/2014_JSLHR-L-13-0234. PMID: 24686792
 
Hisagi, M., Garrido-Nag, K., Datta, H., & Shafer, V. L. (2014). ERP indices of vowel processing in Spanish-English bilinguals. Bilingualism: Language and Cognition. doi: 10.1017/S1366728914000170. Published online 04 June 2014, pp. 1–19.
 
Epstein, B., Hestvik, A., Shafer, V. L., & Schwartz, R. G. (2013). ERPs reveal atypical processing of subject versus object wh-questions in children with specific language impairment. International Journal of Language & Communication Disorders, 48(4), 351–365. doi: 10.1111/1460-6984.12009. PMID: 23889832
 
Wagner, M., Shafer, V. L., Martin, B., & Steinschneider, M. (2013). The effect of native-language experience on the sensory-obligatory components, the P1-N1-P2 and the T-complex. Brain Research, 1522, 31–37. PMID: 23643857
  
MacRoy-Higgins, M., Schwartz, R. G., Shafer, V. L., & Marton, K. (2012). Influence of phonotactic probability/neighbourhood density on lexical learning in late talkers. International Journal of Language & Communication Disorders.
 
Shafer, V. L., Yu, Y. H., & Garrido-Nag, K. (2012). Electrophysiological indices of vowel discrimination in monolingually and bilingually exposed infants: Does attention matter? Neuroscience Letters, 526(1), 10–14. Epub 2012 Aug 9. PMID: 22897876
 
Wagner, M., Shafer, V. L., Martin, B., & Steinschneider, M. (2012). The phonotactic influence on the perception of a consonant cluster /pt/ by native English and native Polish listeners: A behavioral and event-related potential (ERP) study. Brain and Language, 123(1), 30–41. Epub 2012 Aug 4. PMID: 22867752
 
Shafer, V. L., Yu, Y., & Datta, H. (2011). The development of English vowel perception in monolingual and bilingual infants: Neurophysiological correlates. Journal of Phonetics (special issue on cross-linguistic speech perception), 39, 527–541. PMID: 22046059
 
Shafer, V. L., Martin, B., & Schwartz, R. G. (2011). Evidence of deficient central speech processing in children with specific language impairment: The T-complex. Clinical Neurophysiology, 122(6), 1137–1155. PMID: 21147550
 
Hisagi, M., Shafer, V. L., Strange, W., & Sussman, E. (2010). Perception of Japanese vowel length contrasts by Japanese and American English listeners: Behavioral and electrophysiological measures. Brain Research, 1360, 89–105. PMID: 20816759

Shafer, V.L., & Sussman, E. (2010). Predicting the future: ERP markers of language risk in infancy. Clinical Neurophysiology. PMID: 20674485

Shafer, V. L., Yu, Y., & Datta, H. (2010). Maturation of electrophysiological mismatch responses (MMR) to vowel contrasts in four- to seven-year-old children. Ear and Hearing. PMID: 20562625

Shafer, V. L., & Garrido-Nag, K. (2007). The neurodevelopmental basis of speech and language. In M. Shatz & E. Hoff (Eds.), The Handbook of Language Development. Oxford: Blackwell.

Shafer, V. L., Morr, M. L., Datta, H., Kurtzberg, D., & Schwartz, R. G. (2005). Neurophysiological indices of speech processing deficits in children with specific language impairment. Journal of Cognitive Neuroscience, 17, 1168–1180.

Strange, W. & Shafer, V.L. (2008). Speech perception in second language learners: the re-education of selective perception. Zampini, M., & Hansen, J. (eds). Phonology and Second Language Acquisition. Cambridge University Press.

Shafer, V. L., Schwartz, R. G., & Kurtzberg, D. (2004). Language-specific memory traces of consonants in the brain. Cognitive Brain Research, 18, 242–254.

Morr, M.L., Shafer, V.L., Kreuzer, J., & Kurtzberg, D. (2002). Maturation of mismatch negativity in infants and pre-school children. Ear and Hearing. 23, 118-136.

Neumann, Y., Obler, L. K., Gomes, H., & Shafer, V. L. (In press). Phonological vs. sensory contributions to age effects in naming: An electrophysiological study. Aphasiology.

Datta, H., Shafer, V. L., Morr, M. L., Kurtzberg, D., & Schwartz, R. G. (2010). Brain discriminative responses to phonetically similar long vowels in children with SLI. Journal of Speech, Language, and Hearing Research, 53(3), 757–777.


MacRoy-Higgins, M., Shafer, V. L., Marton, K., & Schwartz, R. G. The influence of phonotactic probability on early word perception. Submitted to the Journal of Child Language.

Collaborators

Elyse Sussman, Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine.

April Benasich, Center for Molecular & Behavioral Neuroscience
Rutgers, The State University of New Jersey

Topics (jump to):

Electrophysiology and Event-Related Potentials (ERPs)
Specific Language Impairment (SLI)
Children with Autism Spectrum Disorders (ASDs)
Language Development in Infants
Info for Research Participants


Electrophysiology and Event-Related Potentials (ERPs)

The brain consists of billions of neurons and connections (axons and synapses) between these neurons. Neurons communicate with each other via electrochemical events.

Electrical changes caused by the activity of the neurons can be recorded at the scalp using electrodes. The signals are very small (on the order of 1 to 100 millionths of a volt) and must be amplified. Specialized amplifiers and computer software have been designed to record this electrical activity, which is called the electroencephalogram, or EEG.

The EEG has been used to examine brain activity in infants, children and adults for over 30 years. It is a safe, non-invasive method.

In our laboratory, a net of 65 electrodes is placed on the scalp. Each electrode is made of tin and makes contact with the scalp via a sponge soaked in a solution of water and salt.

During a study, the infant or child listens to and/or watches some stimulus (e.g., speech sounds, or pictures of animals). These stimuli are called events, and the electrical activity associated with these events is called an event-related potential (ERP).

Electrical activity from the brain is recorded in response to a number of these events. Consistent electrical changes to a category of events can be seen by averaging together the electrical activity from 15 or more of them.

This activity is seen as a series of positive- and negative-going deflections, as shown in Figure 1 below in section 3.

The timing and size of these deflections, and the location of this activity on the scalp, are used to make inferences about the time-course of processing in the brain and the location of the source of this activity.
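The averaging step described above can be sketched numerically. The example below is illustrative only: the waveform shape, sampling rate, noise level, and number of trials are invented for demonstration, not taken from the lab's protocol. It simulates noisy single-trial epochs that each contain the same small event-related deflections, then shows that averaging the time-locked epochs recovers the ERP far better than any single trial does.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 250                        # sampling rate in Hz (illustrative)
t = np.arange(0, 0.6, 1 / fs)   # one 600 ms epoch, time-locked to the event
n_events = 30                   # repetitions of the same event category

# A hypothetical "true" ERP in microvolts: a negative deflection near
# 100 ms followed by a positive one near 200 ms.
true_erp = (-3 * np.exp(-((t - 0.10) ** 2) / (2 * 0.02 ** 2))
            + 4 * np.exp(-((t - 0.20) ** 2) / (2 * 0.03 ** 2)))

# Each single-trial epoch = true ERP + much larger random background EEG.
epochs = true_erp + rng.normal(0, 10, size=(n_events, t.size))

# Averaging the time-locked epochs cancels the random background activity
# and leaves the consistent event-related deflections.
erp_estimate = epochs.mean(axis=0)

single_trial_error = np.abs(epochs[0] - true_erp).mean()
average_error = np.abs(erp_estimate - true_erp).mean()
print(average_error < single_trial_error)  # → True
```

Because the background noise is random with respect to the event, averaging N epochs reduces its amplitude by roughly a factor of the square root of N, which is why a category needs many repetitions before a clean waveform emerges.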


Specific Language Impairment (SLI)

SLI is a developmental language disorder that occurs in the absence of frank neurological, sensorimotor, non-verbal cognitive, or social-emotional deficits (see Watkins, 1994).

Children with SLI lag behind their peers in language production and language comprehension, which contributes to learning and reading disabilities in school.

One of the hallmarks of SLI is a delay or deficit in the use of function morphemes (e.g., the, a, is) and other grammatical morphology (e.g., plural -s, past tense -ed). Children with SLI omit function morphemes from their speech long after age-matched children with typical language development show consistent production of these elements.

Some researchers claim that SLI children's difficulty with grammatical morphology is due to delays or difficulty in acquiring a specific underlying linguistic mechanism. For example, Mabel Rice and Ken Wexler suggest that children with SLI have difficulty acquiring the rule that verbs must be marked for tense and number ("he walks", not "he walk"; Rice, 1994).

A second hypothesis is that these children have a deficit in processing brief and/or rapidly-changing auditory information, and/or in remembering the temporal order of auditory information. For example, Paula Tallal has found that some children with SLI have difficulty reporting the order of two sounds when these sounds are brief in duration and presented rapidly (Tallal, et al., 1985). Laurence Leonard suggests that this deficit may underlie difficulties in perceiving grammatical forms (e.g., "the", "is"), which are generally brief in duration (Leonard et al., 1997).

A third hypothesis is that these children have poor short-term memory for speech sounds (e.g., Gathercole, 1998). Children with SLI perform worse than children with typical language skills at repeating nonsense words (for example, "zapanthakis"). In a number of recent studies, short-term memory for speech sounds has been shown to correlate highly with vocabulary acquisition and speech production. This has led to the hypothesis that a primary function of this memory is to facilitate language learning.


Children with Autism Spectrum Disorders (ASDs)

The Developmental Neurolinguistics Lab is currently working on research to assess the information processing skills of a subset of children with Autism Spectrum Disorders (ASDs) who have no functional expressive language and do not demonstrate evidence of understanding oral language by traditional means of responding, such as looking, referencing, pointing, or following directions. This group is often characterized as "low functioning" because their level of performance cannot be determined using standard assessments of cognition and language; therefore, their cognitive level and capacity for learning have not been comprehensively assessed. Passive electrophysiological tasks will be run, and the findings will allow us to identify and begin to quantify the linguistic and cognitive capabilities of this difficult-to-assess group of children with autism. (Acknowledgement: this research is supported by Autism Speaks.)

Information about the definition and symptoms of Autism Spectrum Disorders (ASDs): ASDs are a group of complex neuro-developmental disabilities characterized by communication deficits, impaired social interaction, and unusual behaviors or interests (National Institute of Child Health & Human Development, 2008; Bakare & Ikegwuonu, 2008). ASDs typically are first observed in early childhood and affect approximately 1 in every 150 children born in the U.S. (Centers for Disease Control and Prevention, 2009).

ASDs are "spectrum" disorders, meaning that there is a range or "spectrum" of impairments that have varying degrees of severity from person to person. The three main types of ASD are: Autistic Disorder, Asperger Syndrome, and Pervasive Developmental Disorder – Not Otherwise Specified.


Language Development in Infants

Perception and Comprehension

Infants are active learners of language from before birth. While still in the womb, the infant becomes familiar with the melody and rhythm of the mother's voice.

At birth, infants can discriminate speech sounds found in any of the world's languages.

By 6 months of age, babies show that they are learning which vowel sounds are important in the language(s) used by their caretakers (for example "e" of "bed" in English, or "oi" in "moi" in French).

Babies begin to comprehend their first words around 9 months of age.

By 10 months of age, infants show that they are learning which consonant sounds are important in the language(s) used by their caretakers (for example "th" in English, or trilled "r" in Spanish). They also learn which sequences of sounds are allowed (for example, "str" in English, and "vl" in Dutch).

By 12 months of age, babies are becoming familiar with basic patterns of grammar, such as the word order of the language.

Babbling and Word Production

Newborn babies cannot make adult-like consonant and vowel sounds because their breathing mechanism sits higher in the throat, which helps prevent choking (it is set up much like a chimpanzee's!). The noises you hear a newborn make are primarily associated with feeding (for example, burping and sucking) and crying.

Around 2 months of age, the voice box (larynx) begins to descend in the throat. The sounds infants make at this stage are often called cooing or gooing, and most are somewhat resonant (like singing). Squeals, giggles, and shrieks are gradually added by 4 months of age.

At 4 months, the infant begins making sounds that are like simple adult syllables (for example, [ba] or [ga]).

By 6 months, the infant has begun reduplicating these syllables ([baba], [didi]).

Around 8 months of age, the infant begins producing a variety of syllable types (for example, [badi], [goki]), with lots of intonational variation.

Between 10 and 12 months of age, infants will often produce their first words. These words are usually unlike the adult form (for example, [ba] for bottle). We call them words because the infant consistently produces the form in appropriate contexts (asking for the bottle, pointing at the bottle, dropping the bottle).

More adult-like word forms develop between 2 and 3 years of age.

Brain Development

At birth the infant brain is still quite immature. This immaturity appears to be important for learning how to cope with the surrounding world. Some brain areas (for example those dealing with vision or audition), need exposure to the appropriate information (for example, environmental sounds and speech sounds for audition) to develop properly. This is why it is so important to get your child's hearing screened during infancy. If you notice any apparent problems with your child's hearing, then contact (add ASHA link). Talking to your child, playing music, and allowing them to listen to the typical sounds of your environment will allow proper development of the auditory regions.

The sensory brain regions develop earlier than regions involved in higher-level cognitive processing (for example, memory regions in the frontal lobes). Thus, what a child can learn at a particular age is constrained by brain development. You will not be able to teach your baby difficult memory tasks because their brain is not yet ready.


Info for Research Participants

(jump to):

Current Studies
What's it like to be in a study?
What is it like for your child to be in a study?
Dear Parents
Confidentiality and Anonymity
Directions to the Developmental Neurolinguistics Lab

Current Studies:
  1. Neurodevelopmental Basis of Speech Perception. We invite monolingual (English) and bilingual (Spanish-English) parents with children ages 3 months to 10 years old, as well as bilingual English- and Spanish-speaking adults (ages 21 – 40) who learned English either before age 5 or after age 18, to participate. Call Christina Padron at (212) 817-8833 or email DevNeuroLab@gc.cuny.edu.

  2. The Effects of Attention on the Mismatch Response in Infants. We invite parents and infants 4 – 10 months old to participate. Call Karen Garrido at (212) 817-8858, Christina Padron at (212) 817-8833, or email DevNeuroLab@gc.cuny.edu.

  3. ERP Processing of Speech in Bilinguals. We invite bilingual speakers of English and Spanish (18 – 40 years old) to participate. Contact Karen Garrido at (212) 817-8858 or ksgarrido@aol.com, or Miwako Hisagi at (212) 817-8820 or mhisagi@gc.cuny.edu.

  4. Intensity Study. We invite adults 19 - 40 years old and parents with infants 6 - 12 months old of any language background to participate. Email DevNeuroLab@gc.cuny.edu.

  5. Miwako's Dissertation.

  6. Yael & Arild's study.

*WE OFFER MONETARY COMPENSATION FOR YOUR TIME

What's it like to be in a study?

The initial 10 to 15 minutes will be devoted to fitting the electrode net onto the head of the participant. An additional 5 to 15 minutes will be devoted to ensuring that the participant is comfortably seated, understands the purpose of the electrode net, and is maximally comfortable while wearing it. Adult participants will be seated upright in a soft vinyl chair in a sound- and electrically-shielded room in the Developmental Neurolinguistics Lab at the CUNY Graduate Center. Child participants will sit upright in a child's chair. Infants will sit on their guardian's lap or in a highchair.

The electrode net hangs over the chair from a height-adjusting, wall-mounted bracket. This bracket, rather than the participant's head, bears the weight of the electrode net.

Prior to fitting the net, the experimenter(s) will provide a description of the purpose of the net, what the participant should expect to feel when the cap is positioned, and a demonstration of the technique used to position the electrode net.

When the participant is seated, the electrode net is adjusted on the head of the participant so that the electrodes make contact with the scalp. The end of each electrode is fitted with a soft sponge that has been soaked in a sterile saline solution.

The purpose of the saline solution is to improve the contact and electrical conductivity between the scalp and each electrode. The purpose of the sponges is to hold the saline solution, and make the cap more comfortable to wear.

The electrode net consists of 65 small electrodes that, when in place, rest lightly at positions around the sides, top, and back of the head. The adult nets and nets for the older children also have two electrodes that rest against the cheeks of a subject, about 2 cm below the center of the eyes. These electrodes pick up eye movements.

Participants may experience a light, non-painful tactile sensation at each electrode point, along with the sensation of cold and/or wet at each point as a result of the damp sponges. Scalp electrodes pose no threat of personal injury.

They are safe clinical and research tools that have been used with the adult population for over 50 years and with the infant population for at least 30 years.

The design of the electrode net used in this study is an improvement over previously used electrode caps and scalp electrodes because it does not require rubbing the scalp to improve contact.

After the electrode net has been fitted, the participant will be asked whether the net is comfortable or whether adjustments should be made. As noted above, 5 to 15 minutes will be devoted to ensuring that the participant is comfortable wearing the electrode net after it has been positioned.

After the fitting is complete, the older participants will receive verbal instructions about the listening experiment.

Only adults and children older than 4 years of age will be asked to respond to a target. The target is used to determine whether the participant is listening to the stimuli. The older participants are asked to fixate their eyes on a position directly ahead to minimize eye movements.

The participant is free to stop the experiment at any time without prejudice or negative consequences. Young children and infants will be observed by the experimenters and child's caregiver to determine whether the infant/child is experiencing discomfort.

The session will be stopped immediately if the participant so requests, or if the experimenter(s)/caregiver observe that the participant is demonstrating signs of discomfort or non-compliance.

The goal is to make the experience as comfortable as possible for the participant, so as to ensure their health and safety, as well as to ensure that the results of their participation are of the highest quality.

After the instructions are provided, test tones, recorded utterances, and any visual stimuli will be presented to the participant for familiarization with the task and to ensure comfortable loudness settings of the output apparatus.

Participants will listen to the verbal stimuli over speakers positioned several feet to the side and front of the chair in which they are seated. The visual stimuli will be presented on a video screen one meter in front of the participant, at eye level.

The verbal stimuli in each listening task will be segmented into three- to five-minute units. A break lasting several minutes will be taken between segments. During presentation of the verbal stimuli, ERP measurements will be taken. The listening task for each experiment will take approximately 25 - 30 minutes.

Results will be saved to disk, labeled, and stored for analysis.

After the session, participants will have access to facilities to wash, dry, and style their hair.

Participation of parents/caregivers

The parent/caregiver of infants up to 3 years of age will be asked to fill out the MacArthur questionnaire. These forms will be sent home with the parent, along with a stamped, addressed envelope.

The parent will be asked to fill the questionnaire out within the next week and return it to the lab. The questionnaire takes less than 30 minutes to complete.

What is it like for your child to be in a study?

Step 1 Lab Welcome

  1. One of our helpers will describe the study to you after you arrive.

  2. The researchers will play with your child. This will make your child feel more comfortable with the researcher.

  3. The research methods we use are comfortable for you and your child and have no known potential for harming your child either physically or mentally. They are safe methods that have been used with adults for over 50 years and with infants and children for over 30 years.

Step 2 Putting on the Sensors

  1. We will measure your child's head using a paper tape measure.

  2. A net of sensors that fits your child's head will be soaked in water mixed with a special salt and a drop of baby shampoo. The soft sponges on the net soak up this water solution to improve conductivity and also make the net more comfortable to wear.

  3. The net is patted dry with a paper towel to prevent water from dripping into your child's eyes when the net is put on.

  4. One researcher will gently ease the net onto your child's head. A second researcher will entertain your child with toys.

  5. The researcher will check the sensors to make sure they are in the correct place.

Step 3 Listening to Sounds

  1. Your child will watch a video or DVD with the sound turned off.

  2. Sounds are played over speakers in the room.

  3. You will wear headphones and listen to a CD of your choice so you can't hear the sounds. The sounds are simple speech sounds like "bip" or "ba."

Step 4 Clean-up

  1. The net will be removed gently from your child's head.

  2. Your child's hair will be a little bit wet from the salt solution that the net was dipped in.

  3. Simply rinsing or washing their hair when you get home will take out the solution.

  4. There are no known allergic reactions to the salt solution.

Step 5 What to Bring to the Lab

  1. Bring snacks that you know your child likes. We have Cheerios, pretzels, crackers, and juice.

  2. Bring your child's favorite video or DVD to watch during the study. You can also pick from a selection of videos at our lab if you prefer not to bring one.

  3. Clean hair (no hair spray or gel) works best.


Dear Parents,

We are looking for parents and children who are interested in taking part in studies of child language learning of English and other languages (for example, Spanish).  We study children from birth to ten years of age. If you are interested in finding out more about our studies or would like to take part, then contact us.
The goal of our lab, The Developmental Neurolinguistics Laboratory, is to understand how language is learned from infancy through adulthood. We have several ongoing studies using different methods to look at language learning.
Some of the studies in our lab include:

How Does Attention Affect Learning of Speech Sounds in Infants?
This study looks at what infants pay attention to when speech sounds and visual images are played to them.  Knowing what grabs their attention will help us understand why some children, who seem less interested in speech sounds, have difficulty learning language. This study may also help us identify which infants are at risk for certain disorders like autism.

How Does the Brain Respond to Speech in Infants and Children?
The ability to tell words apart that sound similar (for example "bear" from "pear") is necessary to learn language.  We are studying how infants and children differentiate speech sounds (for example "b" versus "p") and  words ("lock" and "luck").  We also want to see how learning two languages (for example Spanish and English) affects children's ability to tell words apart.  For example, the vowel difference between "lock" and "luck" is difficult for Spanish speakers to hear because these vowels are not used in Spanish.  These studies may help us identify infants and children who are at risk for delayed language acquisition and will help us understand bilingual language learning.

What Do Mothers Say to Their Infants?
A parent's language is quite important for a baby learning to speak. Our studies look more closely at what parents say to babies to learn more about how this affects the baby's language. The mother and baby are videotaped in the home for about 30-40 minutes by one of the researchers in our lab. These studies may help us to develop better ways to teach a child to learn a second language.

How Do We Answer These Questions?
We use several methods in our lab to answer these questions. In one method, we can use what an infant looks at to tell us that they understand a word. For example, we can say the word "bear" while we show a picture of a bear on one screen and a picture of a pear on another screen, and see if the child looks longer at the bear or the pear picture.   These studies can help us understand how children who are not yet speaking learn and hear words.

In another method, called electrophysiology, we look at the brainwaves of the child. This brainwave activity occurs naturally in everyone.  Electrophysiology uses the electroencephalogram or EEG.  The EEG is used as a safe clinical test in hospitals. We use specially-designed sensors wrapped in soft sponges to look at the child's brainwaves in response to some stimulus (for example, a speech sound).  The net (which looks like a hair net) is fast to put on and comfortable to wear. Our sensors are designed to pick up the brain activity.  This activity is sent to an amplifier (like an amplifier on a stereo system) and then into a computer where we can study it.   There are no known adverse reactions to an EEG.

Our lab's long-term goal is to be able to use the findings of our studies to understand more about language development in infants and children who have language disorders. Children with Specific Language Impairment, Autism, and Attention Deficit Disorder are just a few who experience difficulty in using language. If we can discover how infants learn language, how they hear speech sounds, and what other abilities affect learning, then we might be able to identify and treat children with language disorders early on.

We encourage you to bring your child into our lab to take part in a study. We are looking for infants and children between the ages of birth and 10 years of age with or without language impairment. You will be paid for your time.

If you have any further questions, are interested in participating in a study, or would like to come visit our lab to see the set up and meet us, please contact us.

We are located at:
The Graduate School and University Center-CUNY
365 Fifth Avenue
New York, NY  10016
Seventh Floor, Room 7392


Our telephone numbers are: (212) 817-8833 or (212) 817-8858
You can email us at: DevNeuroLab@gc.cuny.edu 

Sincerely,

Valerie L. Shafer, PhD
Associate Professor, Program in Speech and Hearing Sciences
Director, Developmental Neurolinguistics Laboratory


Confidentiality and Anonymity

Several safeguards will be put into place to ensure the confidentiality and anonymity of each study participant.

For example, pseudonyms will be used on all documents and labels, and in all experimenter discussions to protect the identity of each participant. In the event that aspects of this research are published or presented publicly, pseudonyms will also be used. Only those individuals named as the principal investigator and co-investigators on the application form will have access to data and documentation prior to, during, and after the study.

Correspondence between the study participants and experimenter(s) will be direct, so as to avoid contact with other individuals within the Graduate Center, including secretarial staff, who might later be able to identify the participants.

After completion of the study, all data and documents identifying the participant will be stored in a locked cabinet to which only the principal investigator will have immediate access.

Additional copies of data, forms, or other documents will be shredded or otherwise destroyed, and any identifying information will be masked with dark ink.


Directions to the Developmental Neurolinguistics Lab

We are located at The Developmental Neurolinguistics Lab (Room 7392), The Graduate Center, CUNY.

The Graduate Center (housed in the historic B. Altman building) is located at 365 Fifth Avenue, between 34th and 35th Streets, diagonal to the Empire State Building.

 

Travel within NEW YORK CITY:
The closest SUBWAY station, located at 34th Street and Avenue of the Americas (6th Ave), is served by the B, D, F, V, N, R, W and Q trains.

PENN Station, with an entrance at 7th Avenue, is served by the 1, 2, 3, and 9 IRT trains. One block west, the 8th Avenue station is served by the A, C, and E lines.

Buses: Take the M34 to 34th Street-Herald Square and walk east to 5th Avenue. Or take the M2, M3, M4, M5, Q12, X23, or X24 to 34th or 35th Street and 5th Avenue.

Travel from NEW JERSEY by Public Transportation:
You can take the NJ Transit or PATH train to the NY Penn Station and walk 3 avenues east to 5th Avenue and 2 streets north to 34th Street.

Parking: The closest parking garages are on 34th and 35th Streets between 5th and 6th Avenues. If driving, keep in mind that you cannot turn left at 35th Street and 5th Avenue. We will be glad to give you specific directions if you ask us over the phone.

When you enter 365 Fifth Avenue, sign in at the book by the guard's area and take the elevator to the 7th floor.  Follow signs to the Speech and Hearing Sciences main office and ask for Dr. Valerie Shafer.

Neurolinguistics Laboratory

In this laboratory, work is conducted on bilingualism and bidialectalism in aphasia, morphological disorders in agrammatic aphasia across languages, processes involved in reading in typical readers and individuals with dyslexia, and language changes associated with healthy aging and dementia (e.g., the ability to comprehend accented speech). During Spring 2009, Neurolinguistics Laboratory meetings are held on Wednesdays from 11:30-1:30; in Fall, they will be held Thursday afternoons from 2-4. They are open to all.

Lab Meetings: Neurolinguistics Lab meetings are held weekly. 
Contact loraine.obler@gmail.com to find out the current times.

Mission Statement:

The Neurolinguistics Laboratory has as its goal to understand the organization and processing of language in the adult brain. Particular foci of interest are:

  1. the language changes associated with aphasia and its treatment

  2. the way agrammatism manifests differently, and similarly, across languages

  3. the language changes associated with healthy aging and dementia

  4. the way languages are organized in, and utilized by, the brain of the bilingual or polyglot

  5. how dyslexics (monolingual or bilingual, young or old) succeed in learning a 'foreign' language


Current Research:

Ongoing research in the Neurolinguistics Lab includes projects on brain and cognitive resources available for language performance in older adults, lexical retrieval in discourse in older adults, language reserve as distinct from cognitive reserve, and sentence comprehension in older adults (as it links to available brain regions, underlying executive functions, different syntactic structures, and eye-tracking).

Research with colleagues in other labs includes some of these and other topics in conjunction with the NIH-funded Language in the Aging Brain Lab at the Boston VA Healthcare System where Prof. Obler is co-PI with Martin Albert.


Principal Investigator:

Loraine K. Obler, Ph.D.
Distinguished Professor
Speech-Language-Hearing Sciences,
Graduate School and University Center,
City University of New York,
365 Fifth Ave
New York, New York 10016-4309

Phone: 212-817-8809
Fax: 212-817-1537
email: loraine.obler@gmail.com

Faculty  
Loraine K. Obler photo

Loraine K. Obler, Ph.D.

In addition to her position as Distinguished Professor in Speech-Language-Hearing Sciences, Loraine K. Obler has a joint appointment in the Linguistics Program. As well, she and Martin Albert were co-PIs of the NIH-funded Language in the Aging Brain Laboratory of the Boston University School of Medicine Harold Goodglass Aphasia Research Center at the VA Boston Healthcare Center. Her research articles reflect her interests in such topics as the language changes associated with healthy aging and Alzheimer's disease, neurolinguistic study of bilingualism, cross-language study of agrammatism, and neuropsychology of talent as it relates to dyslexia and individual differences in second-language acquisition. The books she has co-authored or co-edited include Aspects of Multilingual Aphasia (with M. Gitterman and M. Goral, Multilingual Matters, 2012), Communication Disorders in Spanish Speakers: Theoretical Research and Clinical Aspects (with J. Centeno and R. Anderson, Multilingual Matters, 2007), Language and the Brain (with K. Gjerlow, Cambridge University Press, 1999, currently under revision), Language and Communication in the Elderly (with M.L. Albert, D.C. Heath and Co., 1980), Neurobehavior of Language and Cognition: Studies of Normal Aging and Brain Damage (with L. Connor, Kluwer Academic Publishers, 2000), and The Bilingual Brain: Neuropsychological and Neurolinguistic Aspects of Bilingualism (with M.L. Albert, Academic Press, 1978).

Dr. Obler CV

Graduate Students  
Headshot for the Speech-Language-Hearing Sciences program
Amy Vogel-Eyny
She is currently a level 3 doctoral student working on her dissertation, which examines the contribution of frontal and temporal brain areas to lexical processing, such as unique-entity retrieval and tip-of-the-tongue states, in healthy older adults through the use of transcranial direct current stimulation. Her work investigates language production changes in healthy aging as well as the contribution of cognitive processes to maintaining or even improving language abilities. She is additionally interested in further understanding neural plastic changes in the aging brain, particularly for language tasks, and the extent to which such changes are evidence of compensatory or deficient processing. An important aspect of her research is that her findings have functional relevance to the populations she examines.
Headshot for the Speech-Language-Hearing Sciences program
Jungna Kim
Jungna is currently a Ph.D. candidate in the Neurolinguistics lab in the Speech-Language-Hearing Sciences program with an Enhanced Chancellor's Fellowship. Her research interests mainly lie in the relationship between cognitive control (e.g., working memory, interference control, and updating) and bilingual auditory discourse (text) processing. She is also actively participating in several research projects with various collaborators. These include a) a project reviewing two language comprehension tests for reliability and validity, b) collaborative research on the effectiveness of the Intensive Comprehensive Aphasia Program (ICAP), c) collaborative research on sentence processing in older adults using an eye-tracking method, d) a collaborative project on language intervention for patients with concussion using tDCS, e) a project on L2 acquisition in the aging population, and f) graduate-student research on multilingual aphasia. She received her Bachelor's degree in French and English linguistics and a Master's degree in Applied Linguistics.
Headshot for the Speech-Language-Hearing Sciences program
Iris Strangmann
I'm a PhD candidate in the Speech-Language-Hearing Sciences program at the CUNY Graduate Center. I have a bachelor’s degree in Communication and Information Sciences and a Master's degree in Linguistics, both from the University of Groningen, The Netherlands. I'm interested in how bilinguals control their languages, specifically how language use and exposure affect the bilingual language control mechanism. My dissertation examines language control via code-switching using electrophysiological measures. In addition to my own research, I'm involved in a project examining the use of cognates in Norwegian-English bilinguals diagnosed with dementia.
Taryn Malcolm
Taryn Malcolm
Taryn Malcolm is currently a doctoral student and member of Loraine Obler’s Neurolinguistics lab in the Speech-Language-Hearing Sciences Department. She received her Master of Arts Degree in Speech-Language Pathology from St. John’s University.  She has practiced as a speech-language pathologist in acute and sub-acute rehabilitation with pediatric, adult, and geriatric populations, with a focus on neurogenic disorders, respiratory/voice disorders, and dysphagia. Her main areas of research include bilingualism, aphasia, bilingual aphasia, and neurological processes underlying acquired language disorders. She is currently working on a research project investigating cross-linguistic influence in speakers of Jamaican Creole following immersion in the environment of their second language, as well as an fMRI study investigating language comprehension in noise.
Headshot for the Speech-Language-Hearing Sciences program
Aviva Polus
I am currently a level 3 doctoral student in the Neurolinguistics lab. I have a B.A. in speech and language pathology and an M.A. in bilingualism and biculturalism, both from Hadassah Academic College, Jerusalem. I have worked as an SLP for almost 10 years in adult rehabilitation in Israel. My interests include bilingualism, aging, and aphasia, and I am currently working on my doctoral research proposal on treatment of bilingual aphasia in the SLP clinic.
Headshot for the Speech-Language-Hearing Sciences program
Marta Korytkowska
Marta Korytkowska M.S. CCC-SLP is a Speech Language Pathologist with a focus on adult speech, language and swallowing disorders. She is a second level doctoral student in the Neurolinguistics lab. Her first examination project focused on executive functions and bilingual advantage in heritage language bilinguals. Marta’s current research interests and projects are related to lexical retrieval in healthy adults and those with non-fluent aphasia, as well as the interaction of cognition with the recovery process. Her primary focus is on the treatment research in acquired aphasia and primary progressive aphasia.
Headshot for the Speech-Language-Hearing Sciences program
Stanley Chen
Stanley Chen is a level 2 doctoral student in the lab. He received his dual BA from National Chengchi University, focusing on linguistics in English and Historical Chinese and an MSEd from the University of Pennsylvania in TESOL. Recently, he completed his first exam proposal, examining the effects of punctuation and prosody on ambiguity resolution in Mandarin Chinese. His main interests include bilingualism, electrophysiology, and sentence processing.
Headshot for the Speech-Language-Hearing Sciences program

Zahra Hejazi

Zahra received her Bachelor’s and Master’s degrees in speech and language pathology from Tehran University of Medical Sciences and St. John’s University. She is currently pursuing her Ph.D. in Speech-Language-Hearing sciences at the CUNY Graduate Center. Her interests include aphasia, bilingualism, and healthy aging.

Headshot for the Speech-Language-Hearing Sciences program

Katarina Antolovic

Katarina Antolovic is a third year Ph.D. student in the Neurolinguistics Lab. She graduated with a B.A. in Linguistics and a B.S. in Communication Sciences & Disorders – Speech-Language Pathology from the University of Texas at Austin. Her previous research has investigated the effects of language brokering on patterns of language use, how cognates are used by multilinguals with aphasia and dementia, and the psycholinguistic nature of cognates. Her current interests include semantic control, lexical access, and semantic organization of the mental lexicon in older adults and bilinguals. Moreover, she is interested in how cognitive changes associated with aging interact with sources of interference during speech production and processing (e.g., cross-linguistic interference, semantic interference). Her research aims to identify semantic interference effects in healthy older adults as a stepping stone to determining how semantic interference may be used as a clinical marker for dementia.

Headshot for the Speech-Language-Hearing Sciences program
Lauren Grebe
Lauren Grebe is a level 2 doctoral student in Dr. Obler’s Neurolinguistics lab. She received her Bachelor of Science and Master of Arts in Speech-Language Pathology from Molloy College and St. John’s University, respectively. She has practiced as a speech language pathologist in acute and sub-acute rehabilitation with adult and geriatric populations. Her research interests include aphasia, dementia, and primary progressive aphasia. She is currently working on a research project examining the influence of various demographic variables on the severity and manifestation of primary progressive aphasia.
Headshot for the Speech-Language-Hearing Sciences program
Gerald Imaezue
I am a doctoral student in the Speech-Language-Hearing Sciences program and a member of the Neurolinguistics Lab. I am also an SLP with experience in adult rehabilitation. I received my Bachelor’s degree in special education (speech pathology and audiology specialty) and political science, and my Master’s degree in speech pathology and audiology from the University of Ibadan, Nigeria. My previous research focused on treatment efficacy of vocal fatigue in teachers, developing the integrated systems hypothesis based partly on the neural multifunctionality concept, bilingualism in aphasia and  social healthcare research towards promoting access to aphasia rehabilitation in Nigeria. My research interests are aphasia, neural multifunctionality, neuroplasticity and language in aging.
Headshot for the Speech-Language-Hearing Sciences program
Kyung Eun Lee
Kyung Eun Lee is a first-year doctoral student in the Neurolinguistics lab in the Speech-Language-Hearing Sciences program. She received her Master of Education in English Education from Yonsei Graduate School of Education in Korea and completed her Post-Master's program in TESOL at New York University. She is currently involved in the L2/Aging project in this lab. Her research interests broadly lie in bilingualism and L2 acquisition.

Visiting Scholars in residence at GSUC Neurolinguistics Lab

 

2018-2018 Teresa Signorelli Pisano, Marymount Manhattan College
2018-2019 Lourdes Ortega, Georgetown University
2018-2019 Yael Neumann, Queens College CUNY
2013, 2016 Zohar Eviatar, Haifa University (invited)
2012, 2016 Alexandre Nikolaev (post-doc, U. of Joensuu)
2011-12 Jet Vonk, University of Groningen
2010-11 Carmit Altman, Bar Ilan University, Mina Hwang, Dankook University
2009 Veronica Morena, University of Valencia
2006 Anat Stavans, Hebrew University and Beit Beryl College, Israel.
2005 – 2008 Seija Pekkala, Helsinki University, Finland
2004 Alessandra Riccardi, Universita per Stranieri, Perugia, Italy; Ruth Berman, Tel Aviv University, Israel; Anne Aimola Davies, Australian National University
2003 Jessica Cancila, Universita per Stranieri, Perugia, Italy
2001 Prathibha Karanth, Shetty Institute, Mangalore, India
1987 Pirkko Kukkonen, Helsinki University, Finland

Graduates, their dissertations, and current affiliations

Higby, E. (2016) “Native language adaptation to novel verb argument structures by Spanish-English bilinguals: An electrophysiological investigation”

Paplikar, A. (2016) “Language age-mixing in discourse in bilingual individuals with non-fluent aphasia”

Ashaie, S. A. (2016) “Modulating the semantic system: The role of bilateral anterior temporal lobes in confrontation naming- A combined tDCS and eye-tracking study”

Hyun, J. (2016) "The Relationship Between Lexical Performance and Regional Gray Matter Volumes: A Longitudinal Study of Cognitively Healthy Elderly"

Park, Y. (2015) “Roles of shifting attention, alternating attention and inhibition on temporary syntactic ambiguity resolution and use of context in younger and older adults”

Conner, P. (2013) “Novel spoken word learning in adults with developmental dyslexia”

O’Connor Wells, B. (2011) “Frequency, Form-Regularity and Semantic Effects in Agrammatism:  Evidence from Spanish Ser and Estar"

Datta, H. (2010) “Brain Bases of First Language Attrition in Bengali-English Speakers”, Molloy College

Anema, I. (2008) “The Relationship between Fluency-based Suprasegmentals and Comprehension in Oral and Silent Reading in Dutch Speakers of English”, SUNY New Paltz

Signorelli, T. (2008) “Working Memory in Simultaneous Interpreters”, Marymount Manhattan College

Ijalba, E. (2007) “Markers of Dyslexia in Spanish-Speakers who Report Severe Difficulties Learning English”, Queens College, CUNY

Neumann, Y. (2007) “An Electrophysiological Investigation of the Effects of Age on the Time Course of Segmental and Syllabic Encoding during Implicit Picture Naming in Healthy Younger and Older Adults”, Queens College, CUNY

Galletta, E. (2003) “Recognition of Accented English in Advancing Age”, Hunter College, CUNY

Mathews, P. (2003) “Derivational Morphology in Agrammatic Aphasia:  A Reading-aloud Study”

Schmidt, B. (2003) “The Relation between Oral Reading and Silent Reading Comprehension Skill”, Molloy College, Chair

Haravon, A. (2002) “Grounding Communication Between Deaf and Hearing People: Technological Advances”

Jones, J. (2002) “Agrammatism in a Bidialectal Speaker of AAVE and SAE”

Goral, M. (2001) “Lexical Access and Language Proficiency of Trilingual Speakers”, Lehman College, CUNY

Wiener, D. (2000) “Mechanisms of Inhibition in Wernicke’s Aphasia”

Meth, M. (1998) "The Influence of Verb Stem Features on Inflected Word Production in Patients with Agrammatic Aphasia"

Chobor, K. (1996) "Processing of Lexical Ambiguity by Brain Damaged Patients"

Centeno, J. (1996) "Use of Verb Inflections in the Oral Expression of Agrammatic Spanish speaking Aphasics", St. John's University, Chair

Eng Huie, N. (1994) "Dissolution of Lexical Tone in Chinese Speaking Aphasics", Hunter College, CUNY

De Santi, S. (1992) "Automatic Speech in Alzheimer's Dementia", General Electric

Johnson, K. (1991) "Metalinguistic Abilities in Literate Adults"

Domingo, R. A. (1991) "The Influence of Setting and Interlocutor on the Ability of Adult Retarded Speakers to Exhibit Control in an Instructional Context"

Mahecha, N.R. (1990) "The Perception of Code Switching Cues by Spanish English Bilinguals"

Bloom, R. (1990) "Dissolution of Discourse in Patients with Unilateral Brain Damage", Hofstra University, Acting Dean

Ehrlich, J. (1989) "Influence of Structure on the Content of Oral Narrative in Adults with Dementia of Alzheimer's Type"

Humes Bartlo, M. (1988) "Neuropsychological Substrates of Success and Failure in Childhood Second Language Learning"

Selected Publications Involving Lab Students (1994-2016)

Marton, K., Goral, M., Campanelli, L., Yoon, J., & Obler, L. K. (2016). Executive control mechanisms in bilingualism: Beyond speed of processing. Bilingualism: Language and Cognition, 1-19.
 
Cahana-Amitay, D., Spiro, A., Higby, E., Ojo, E., Sayers, J., Oveis, A., Duncan, S., Goral, M., Hyun, J., Albert, M., and Obler, L.K. (2016). How older adults use cognition in sentence-final word recognition, Aging, Neuropsychology and Cognition, 16, 1-27.
 
Yoon, J., Goral, M., Marton, K., Campanelli, L., Eichorn, N., and Obler, L.K. (2015). The effect of plausibility on sentence comprehension among older adults and its relation to cognitive functions, Experimental Aging Research, 41 (2), 272-302.

Vonk, J., Jonkers, R., and Obler, L.K. (2015). Semantic subcategories of nouns and verbs: A neurolinguistic review on healthy adults and patients with Alzheimer’s disease. In Astesano, C. and Jucla, M. (Eds.), Neuropsycholinguistic perspectives on language cognition, East Sussex & NYC: Psychology Press, 61-74.
 
Higby, E., and Obler, L.K. (2015). Losing a first language to a second language. In Schweiter, J. (Ed.), The Cambridge handbook of bilingual processing. Cambridge, UK: Cambridge University Press, 645-664.
 
Ijalba, E. and Obler, L.K. (2015). First language grapheme-phoneme transparency effects, Reading in a Foreign Language, 27, 47-70
 
Cahana-Amitay, D., Spiro, A., Cohen, J., Oveis, A., Ojo, E., Sayers, J., Obler, L.K., and Albert, M.L. (2015). Effects of metabolic syndrome on language functions in aging, Journal of the International Neuropsychological Society, 221, 116-125
 
Hyun, J., Conner, P.C., and Obler, L.K. (2014). Idiom properties influencing idiom production in younger and older adults. In J. Järvikivi, P. Pyykkönen and M. Laine (Eds.), Mental Lexicon, 22, 294-315.

Higby, E., and Obler, L. K. (2014). The adaptation of native language construal patterns in second language acquisition. In Lintunen, P., Peltola, M. S. and M.-L.Varila (Eds.), Special issue of AFinLA [Finnish Association for Applied Linguistics], Festschrift in Honour of Paivi Pietila, 6, 32-44
 
Ashaie, S., and Obler, L.K. (2014). Effect of age, education, and bilingualism on confrontation naming in older illiterate and low-educated populations, Behavioural Neurology, 2014, 1-10 Article ID 970520, doi:10.1155/2014/970520
 
Pekkala, S., Wiener, D., Himali, J. J., Beiser, A. S., Obler, L. K., Liu, Y., McKee, A., Auerbach, S., Seshadri, S., Wolf, P. A., and Au, R. (2013). Lexical retrieval in discourse: An early indicator of Alzheimer’s dementia. Clinical Linguistics & Phonetics, 27 (12), 905–921

Higby, E., Kim, J., and Obler, L.K. (2013). Multilingualism and the brain. Annual Review of Applied Linguistics, 33, 68-101

Cahana-Amitay, D., Albert, M.L., Ojo, E., Sayers, J., Goral, M., Obler, L.K. and Spiro, A. (2013). Effects of hypertension and diabetes on sentence comprehension in aging, The Journals of Gerontology Series B: Psychological Sciences and Social Sciences, 68 (4), 513-521, doi: 10.1093/geronb/gbs085

Neumann-Werth, Y., Levy, E.S., and Obler, L.K. (2013). Hemispheric processing of vocal emblem sounds, Neurocase: The Neural Basis of Cognition, 19 (3), 268-281

Signorelli, T., and Obler, L.K. (2012). Working memory in simultaneous interpreters. In Altarriba J. and Isurin L. (Eds.), Memory, language, and bilingualism: Theoretical and applied approaches, Cambridge, UK: Cambridge University Press, 95-125

Park, Y., and Obler, L.K. (2012). Multilingualism and aphasia. Encyclopedia of applied linguistics

Sebastian, D., Dalvi, U., and Obler, L. K. (2012). Language deficits, recovery patterns, and effective intervention in a multilingual 16 years post-TBI. In Gitterman, M., Goral, M. and Obler L.K. (Eds.), Aspects of multilingual aphasia, Bristol, UK: Multilingual Matters, 122-140

Jones, J., Gitterman, M., and Obler, L. K. (2012) A case study of a bidialectal (African-American vernacular English/Standard American English) speaker with agrammatism. In Gitterman, M. Goral M. and Obler L.K. (Eds.), Aspects of multilingual aphasia, Bristol, UK: Multilingual Matters, 257-274

Fitzpatrick, P., Obler, L.K., Spiro, A., and Connor, L. (2012). Longitudinal study of recovery from aphasia: The case of lexical retrieval. In M. Faust (Ed.), Neuropsychology of Language, 2, 700-719

Obler, L.K. (2012). Conference interpreting as extreme language use. International Journal of Bilingualism, 16 (2), 177-182

Vonk, J. M. J., Jonkers, R., De Santi, S., and Obler, L. K. (2012). Object and action processing in Alzheimer’s disease: The embodied view of cognition. Stem-, Spraak- en Taalpathologie, 17, 103–106

Anema, I., & Obler, L.K. (2012). Hyphens for Disambiguating Phrases: Effectiveness for Younger and Older Adults. Reading and Writing, 1-30.

Signorelli, T., Haarmann, H., & Obler, L. K. (2012) Working Memory in Simultaneous Interpreters: Effects of Age, Task and Memory. In T. Signorelli & K. Sieber (Eds.), special issue of International Journal of Bilingualism, 198-212.

Conner, P.S., Hyun, J., O’Connor, B., Anema, I., Goral, M., Monereau-Merry, M., Rubino, D., Kuckuk, R., & Obler, L.K. (2011). Age-related differences in idiom production in adulthood. Clinical Linguistics and Phonetics, 25, 899-912.

Goral, M., Rosas, J., Conner, P.C., Maul, K., & Obler, L.K. (2011). Effects of language proficiency and language of the environment on aphasia therapy in a multilingual, Journal of Neurolinguistics, 1-14.

Obler, L. K., Albert, M. L., Spiro, III, A., Goral, M., Brady, C. B., Rykhlevskaia, E., Hyun, J., & Schnyer, D. (2011). Chapter 42: Language changes associated with aging. In M. Albert & J. Knoefel (Eds.), Clinical Neurology of Aging. Third Edition, Oxford University Press.

Neumann, Y., Obler, L.K., Gomes, H., & Shafer, V. (2010) Phonological encoding in healthy aging: Electrophysiological evidence. In C. Cairns and E. Raimy (Eds.), Handbook of the Syllable, Leiden: Brill, 255-272.

Obler, L. K., Rykhlevskaia, E., Schnyer, D., Clark-Cotton, M. R., Spiro, A. R., Hyun, J., Kim, D-S., Goral, M., & Albert, M. L. (2010). Bilateral brain regions associated with naming in older adults. Brain and Language, 113, 113-123.

Neumann, Y., Obler, L.K., Gomes, H., & Shafer, V. (2009). Phonological vs. sensory contributions to age effects in naming: An electrophysiological study, Aphasiology, 12, 1028-1039.

Pekkala, S., Goral, M., Hyun, J., Obler, L.K., Erkinjuntti, T. & Albert, M.L. (2009). Semantic verbal fluency in two contrasting languages, Clinical Linguistics and Phonetics, 23, 431-445.

Obler, L.K., Hyun, J., Conner, P., O’Connor, B., & Anema, I. (2007). Brain organization of language in bilinguals. In A. Ardila and E. Ramos (Eds.). Speech and Language Disorders in Bilinguals, New York: Nova Science International, 21-46.

Ijalba, E., & Obler, L.K. (2007). Reading acquisition in Spanish-speaking learners of English. In Centeno, J., Anderson, R., and Obler, L.K. (Eds.), Clinical Communication Studies in Spanish Speakers: From Research to Clinical Practice, Clevedon, England: Multilingual Matters, 243-255.

Blumenthal, P., Bradley, P., Britt, T., Cohn, J., Maxfield, N., McCubbin, J., Michael, E., Moore, P., Obler, L.K., & Walsten, T. (2007). Stress effects on bilingual language professionals, International Journal of Bilingualism, 10, 477-495.

Shah, A., Schmidt, B.T., Goral, M. and Obler, L.K. (2005). Age effects in processing bilinguals’ accented speech. In Cohen, J., McAlister, K., Rolstad, K. and MacSwan, J. (Eds.). ISB4: Proceedings of the 4th International Symposium on Bilingualism. Somerville, MA: Cascadilla Press, 2115-2121.

O’Connor, B., Anema, I., Datta, H., Signorelli, T., & Obler, L.K. (Nov. 2005). Agrammatism: A Cross-linguistic clinical perspective. The ASHA Leader.

Wiener, D., Connor, L., & Obler, L.K. (2004). Inhibition and auditory comprehension in Wernicke’s aphasia. Aphasiology, 18.

Ijalba, E., Obler, L.K., & Chengappa, S. (2004). Bilingual aphasia, In T.K. Bhatia and W. C. Ritchie, Eds., The Handbook of Bilingualism, Blackwell Publishing: Malden, MA

Bloom, R., Obler, L.K., De Santi, S., & Ehrlich, J. (Eds.). (1994). Discourse Analysis and Applications: Studies in Adult Clinical Populations. Mahwah, NJ: Erlbaum Press.

First-authored by Neurolinguistics Lab Students (2005-2010)

Conner, P.S., Hyun, J., O'Connor Wells, B., Rubino, D., Anema, I., Monéreau-Merry, M., Goral, M., Kuckuk, R. & L.K. Obler. Age-related differences in idiom production in adulthood. (submitted)

Hyun, J., Ijalba, E., Signorelli, T., Conner, P. & Obler, L.K. (in press). Frontal lobes, in P. C. Hogan (Ed.). Cambridge Encyclopedia of Language Science. University of Connecticut.

Neumann, Y., Datta, H. & Obler, L.K. (in press). Parietal Lobe, in P. C. Hogan (Ed.). Cambridge Encyclopedia of Language Science. University of Connecticut.

O’Connor, B., Anema, I., Rubino, D. & Obler, L.K. (in press). Wernicke’s area, in P. C. Hogan (Ed.). Cambridge Encyclopedia of Language Science. University of Connecticut.

Park, Y., & Obler, L. K. (in press). Multilingualism and Aphasia. In C. A. Chapelle (Ed.), The Encyclopedia of Applied Linguistics. Oxford, England: Blackwell Publishing Ltd.

Selected Abstracts / Recent Presentations

Park, Y., Monéreau-Merry, M., & Obler, L. K. (2010). Effect of attitudes toward heritage language on language proficiency in Korean heritage language speakers. 1st International Conference on Heritage/Community Language Education, Los Angeles, CA.

O’Connor Wells, B., Obler, L. K., & Goral, M. (2009). Verb-form regularity and tense-aspect frequency in bilingual Spanish-speakers with agrammatism. The 3rd International Symposium on Communication Disorders in Multilingual Populations, Agros, Cyprus.

Park, Y., Goral, M., Maul, K., & Kempler, D. (2009). Differential effect of Constraint-Induced Aphasia Therapy on noun-related/non-noun-related verb production. Academy of Aphasia, Boston, MA.

O’Connor Wells, B., Obler, L. K., & Goral, M. (2009). Verb-form regularity and tense-aspect frequency facilitate copula verb production in Spanish agrammatism. Paper presented at the Academy of Aphasia, Boston, MA.

Yoon, J., Hyun, J., Park, Y., Lavi, H., Barrera, M., Wang, W., Rosas, J., Hauser, C., & Obler, L. K. (2009). Color Cue Effect in Learning German Vocabulary. Showcasing Quirky and Unusual Ideas in Development (SQUID), New York, NY.

Lavi, H., Barrera, M., Hyun, J., Park, Y., Rosas, J., Wang, W., Yoon, J., & Obler, L. K. (2009). Color Cues and Learning New Vocabulary: The Case of Non-Native Learners. NYS TESOL Annual Applied Linguistics Conference, Teachers College, Columbia University, New York, NY.

Hyun, J. & Obler, L.K. (2008). No Dual System for Regular vs. Irregular: Evidence from Honorific Verbs in Korean-speaking Agrammatics. 18th Japanese /Korean Linguistics Conference, Graduate Center of City University of New York, NY.

Szupica-Pyrzanowski, M., Obler, L.K., & Martohardjono, G. (2008). Assessing morphological and phonological contributions to the inflectional deficit in agrammatism. Academy of Aphasia, Turku, Finland.

Hyun, J., Obler, L.K., Lee, Y. (2008). Honorific Production of Korean-speaking agrammatics. Academy of Aphasia, Turku, Finland.

Neumann, Y., Obler, L.K., Shafer, V.L., & Gomes, H. (2008). Phonological Aspects of Naming Impairment in Aging: An Electrophysiological Study. Sixth International Conference on the Mental Lexicon. Banff, Canada.

Datta, H., Shafer, V.L., Goral, M., Obler, L.K. & Gitterman, M. (2008). L1 lexical attrition: L2 interference or L1 disuse? Sixth International Conference on the Mental Lexicon, Banff, Canada.

Conner, P.S., Goral, M., Hyun, J., Rubino, D., Obler, L.K. (2008). Idioms as lexical items: evidence from age-related strategies for production. Sixth International Conference on the Mental Lexicon. Banff, Canada.

Neumann, Y., Obler, L. K., Shafer, V., & Gomes, H. (2008). CUNY Phonology Forum Conference on the Syllable. Graduate Center of City University of New York, NY.

Conner, P.S., Hyun, J., Kuckuk, R., Anema, I., Goral, M., O’Connor, B., Rubino, D., Obler, L. K. (2007). Age-related differences in strategies for idiom production, ASHA, Boston, MA.

Ijalba, E. & Obler, L.K. (2007). Use of a questionnaire in identifying “at risk” English Language Learners. ASHA, Boston, MA.

Neumann, Y., Obler, L. K., Shafer, V. & Gomes, H. (2007). Electrophysiological Evidence of Lexical Access Disruptions, Academy of Aphasia, Washington, D.C.

O’Connor, B., Goral, M., & Obler, L. K. (2007). The Importance of Verb-form Regularity in Agrammatism, Academy of Aphasia, Washington, D.C.

Datta, H., Karthikeyan, S., & Obler, L. K. (2007). Agrammatics’ Sensitivity to Inflectional Optionality, Academy of Aphasia, Washington, D.C.

Ijalba, E. & Obler, L. K. (2007). Phonological-Orthographical and Automaticity Deficits in Spanish-Speakers Learning English, Association of Latin American Linguistics and Philology of Northwest Europe (ALFAL-NE), Oxford.

O’Connor, B., Obler, L. K., & Goral, M. (2007). Neurolinguistics of Spanish: Agrammatism and Verb-Form Regularity, Association of Latin American Linguistics and Philology of Northwest Europe (ALFAL-NE), Oxford.

Datta, H., Obler, L. K., & Shafer, V (2007). Neural Bases of Lexical Attrition in Bengali- and English-speaking Individuals, International Symposium on Bilingualism, Hamburg.

Signorelli, T., Obler, L. K., Haarmann, H., & Gitterman, M. (2007). Aging Memory Skills in Interpreters relative to Non-interpreters, International Symposium on Bilingualism, Hamburg.

Conner, P.C., Hyun, J. Anema, I., O’Connor, B., Rubino, D., Goral, M., & Obler, L.K. (2007). Idioms in the Mental Lexicon: Evidence from idiom production of younger and older adults, GSUC Science Day.

Rubino, D., Anema, I., O’Connor, B., Goral, M., Hyun, J., Conner, P. S., Kuckuk, R., & Obler, L.K. (2007). Aging and Idiom Production: Are idioms compositional, non-compositional, or both? University of Wisconsin at Madison Idioms Conference.

Obler, L.K., Goral, M., & Levy, E. (2007). Agrammatism in a Trilingual, GURT, DC.

Neumann, Y., Levy, E., & Obler, L. K. (2007). Brain Processing of Symbolic Utterances, ILA, NYC.

Conner, P., Hyun, J., Anema, I., Rubino, D, Goral, M. & Obler, L.K. (2006). Idioms in the Mental Lexicon, Mental Lexicon Conference, Montreal.

Ijalba, E. & Obler, L.K. (2006). The influence of native language proficiency in learning ESL by adult Spanish-speakers, National Latino Education Summit, San Juan, Puerto Rico.

Ijalba, E., Obler, L.K. (2005). Reading Problems in Adult Spanish-speakers Learning English, The International Dyslexia Association.

Schmidt, B., Obler, L.K. (2005). Working Memory Compensates for Poor Oral Reading in Adults, International Neuropsychological Society, Dublin, Ireland.

Ijalba, E., & Obler, L.K. (2005). Influencia Del Primer Sistema De Lectura Aprendido En La Adquisicion De Lectura En Una Segunda Lengua: Implicaciones En Relacion A La Dislexia, International Learning Conference, Granada, Spain.

Goral, M., Libben, G., Obler, L.K., & Jarema, G. (2005). L1 Attrition? Evidence from Compound Processing, The Research Institute of Studies of Language in Urban Society (RISLUS) Research Forum, NYC.

Signorelli, T., & Obler, L.K. (2005). Working Memory in Polyglots: A Comparison of Simultaneous Interpreters and Non-interpreter Multilinguals, Cognitive aspects of simultaneous interpreting workshop, Toulouse, France.

Maxfield, N.D., Obler, L.K., Monereau-Murray, M.-M., Rose, T., Rubino, D., & Shafer, V. (2005). Bilinguals’ Speech Perception in Noise: Converging Evidence from ERP and Behavioral Measures, Cognitive Neuroscience Meeting, New York.

Goral, M., Levy, E., Obler, L.K., & Cohen, E. (2005). Response Latency in Word Translation: Evidence from Multilingual Aphasia, International Symposium on Bilingualism, Barcelona, Spain.

Maxfield, N., Obler, L.K. & Shafer, V. (2005). Bilinguals’ Reliance on Context in L2 Comprehension Under Noise: ERP Measures, International Symposium on Bilingualism, Barcelona.

Speech Production, Acoustics and Perception Laboratory

The goals of the Speech Production, Acoustics and Perception Lab (SPAPL) are to understand the organization of the articulatory underpinnings of linguistic structure, to find the critical components of the acoustics for perceiving speech, and to explore the interrelationship between the two. Research in production currently focuses on examining tongue shape with ultrasound, tracking the movement of the jaw with both video and electro-articulometry data, measures to distinguish the speech of persons who stutter from those who do not, a physiological study of the Japanese nasal mora, and using electroglottography for feedback in second-language speech production. Research in perception has ongoing projects in the accessibility of allophonic information in perception, perception of final stops by second-language speakers of English, and the calibration of synthetic vowel spaces to an assumed speaker vocal tract. Other projects are in the development phase.

Lab Meetings: Tuesdays, 12:00-1:30pm, room 7303

Mission Statement:

The goals of the Speech Production, Acoustics and Perception Lab (SPAPL) are to understand the organization of the articulatory underpinnings of linguistic structure, to find the critical components of the acoustics for perceiving speech, and to explore the interrelationship between the two. Research in production currently focuses on examining tongue shape with ultrasound, tracking the movement of the jaw with both video and electro-articulometry data, measures to distinguish the speech of persons who stutter from those who do not, a physiological study of the Japanese nasal mora, and using ultrasound for feedback in second-language speech production. Research in perception has ongoing projects in the realization of voicing of stops when they don’t contrast in voicing, vowel production in Down syndrome, tuning of formants to harmonics in singing, and effects of aging on articulatory organization.

Current Research Topics:

  • Articulatory/acoustic relations in vowels

  • Tongue grooves in consonant harmony

  • Second language acquisition of Russian palatals

  • Articulation in aging and in Down syndrome

  • Voicing characteristics in speech and singing

  • Transcription and acoustics of babbling

  • Phonetics of endangered languages

Principal Investigator:

Douglas H. Whalen, Ph.D. 
Speech-Language-Hearing Sciences,
Graduate School and University Center,
City University of New York,
365 Fifth Ave
New York, New York 10016-4309

Phone: 212-817-8806
email: dwhalen@gc.cuny.edu

Faculty  
Douglas H. Whalen faculty photo
Douglas H. Whalen, Ph.D., Director Douglas H. Whalen joined the GC faculty in spring 2011. He is a distinguished professor in the Speech-Language-Hearing Sciences program and has a joint appointment in the Linguistics program. Dr. Whalen is also Vice President of Research at the Yale- and University of Connecticut-affiliated Haskins Laboratories (where he has been a researcher for thirty years) and is one of the world's leading scientists in the fields of speech and phonetics.

The central theme of Dr. Whalen's research is the interrelation of speech perception and speech production, and how the two constitute a single system and cannot be understood in isolation from one another. His work addresses a wide variety of populations (from developing infants being raised in different language environments to adult speakers of American English and Native American languages) and techniques (including behavioral approaches, MRI, ultrasound imaging of the tongue, and acoustics). He has served as a program officer at the National Science Foundation, overseeing two programs, Documenting Endangered Languages and Cognitive Neuroscience. He is the founder and Chair of the Board of Directors of the Endangered Language Fund, a non-profit organization sponsoring research on the documentation of languages that are falling silent, with a further emphasis on revitalization efforts.

Dr. Whalen also serves on the editorial boards of the Journal of Phonetics and Phonetica. He was elected a fellow of the Acoustical Society of America in 2008 and a fellow of the American Association for the Advancement of Science in 2013. He received his BA from Rice University and his Ph.D. (in Linguistics) from Yale University.
Graduate Students  
Headshot for the Speech-Language-Hearing Sciences program
Reethee Antony. Research interests: examination of speech-in-noise processing in adults and children, the CVC harmony process in the Tahltan language, and the effects of digital signal processing in normal-hearing listeners.
Headshot for the Speech-Language-Hearing Sciences program
Micalle Carl is a doctoral student in the Speech-Language-Hearing Sciences program. Micalle received her M.A. in Speech-Language Pathology from Lehman College and is an NYS-licensed practicing clinician. Her research interests include motor speech disorders, the relationship between articulation and acoustics, and speech intelligibility. She is particularly interested in studying the articulatory characteristics and associated intelligibility of speech in individuals with developmental disabilities.
Headshot for the Speech-Language-Hearing Sciences program
Katherine Dawson is currently a 3rd level doctoral student in the speech lab. She received her undergraduate degree in Applied Biology from the University of Bath (UK) and worked in neuroscience research and the charity sector before entering the program. Her current focus is speech in aging and degenerative neurological conditions. She is particularly interested in how motor control and cognition change across the lifespan, and in the cognition-motor interface.
Headshot for the Speech-Language-Hearing Sciences program
Stephanie Kakadelis is a Ph.D. student in Linguistics. Her research interests are in phonetics, phonology, articulation, and the phonetics of laryngeal features. Her current research is investigating the articulation and acoustics of languages which do not have any laryngeal (voicing) contrasts in their consonant inventories.
Headshot for the Speech-Language-Hearing Sciences program
Jaekoo Kang is a doctoral student in the Speech-Language-Hearing Sciences program. His research interests include the mapping between articulation and acoustics (i.e., speech inversion), investigating the speech perception-production link, and sequence modeling of speech data with corresponding linguistic units (e.g., forced alignment of phonemes or VOT). http://jkang.commons.gc.cuny.edu/
Headshot for the Speech-Language-Hearing Sciences program
Grace Kim-Lambert is currently a 2nd level doctoral student in the Speech-Language-Hearing Sciences program. Her research interests include speech perception and production, and the link between the two. Her research goal is to further an understanding of the intersection between vision and hearing in cross-modal speech perception, specifically, to examine speech production and perception abilities among people who are congenitally blind, focusing on the perception of place of articulation in American English consonants.
Headshot for the Speech-Language-Hearing Sciences program
Richard Lissemore is an internationally acclaimed singing teacher, voice researcher, and performance coach who is equally adept at techniques for classical as well as popular vocal styles such as musical theater, rock, pop, R & B, and jazz. He has taught hundreds of students who perform on Broadway, at Carnegie Hall, at New York’s Radio City Music Hall, in both American and international touring productions, and in theaters throughout the world. He is especially well known for his innovative and entertaining master classes in vocal technique and performance, which have been presented regularly in New York, Canada, Korea, Mexico, and Germany. As a guest speaker and clinician, he has taught workshops and master classes in voice pedagogy and performance for The Voice Foundation, the National Association of Teachers of Singing (NATS), the British Voice Association (BVA) at London’s Royal Academy of Music, the New York Singing Teachers Association (NYSTA), Columbia University, and the University of Cincinnati’s College-Conservatory of Music, among others.
Educated at The Juilliard School (Oren Brown), Cincinnati’s College-Conservatory of Music (Andrew White), and Rutgers University (Valorie Goodall), Mr. Lissemore enjoyed a varied performance career that encompassed opera, music theater, orchestra concert, oratorio, and voiceover for radio and television. His professional affiliations include Actors’ Equity Association (AEA), the National Association of Teachers of Singing (NATS), and The Voice Foundation.
He is presently pursuing a Ph.D. in Speech-Language-Hearing Sciences at The City University of New York, where he is a Graduate Teaching Fellow in the laboratory of Dr. Douglas Whalen. His research interests are centered around articulatory effects on vocal tract transfer functions in professional singers. Experimental protocols include electroglottography (EGG), acoustic analysis, ultrasound of the tongue, Optotrak infrared tracking of mouth and head positions, and VoceVista. Additionally, he is a founder of The Singing Voice Science Workshop, which is an annual gathering of voice researchers, singing teachers, and speech-language pathologists at Montclair State University in Montclair, NJ. Please visit www.RichardLissemore.com and www.SingingVoiceScience.com for more information.
Headshot for the Speech-Language-Hearing Sciences program
Ai Mizoguchi is a 3rd level doctoral student in the speech lab. She is interested in speech production and perception of second language learners. She is currently investigating tongue movements in speech production using ultrasound imaging.
Sejin Oh
Sejin Oh is a Ph.D. student in Linguistics. Her research interests are in phonetics, laboratory phonology, second language acquisition, and prosody. Her current research is investigating unstressed vowel reduction in Bulgarian. Future research plans include investigating positions of the tongue and jaw associated with vowel reduction using EMA.
Headshot for the Speech-Language-Hearing Sciences program
Kevin Roon is a post-doctoral associate in the speech production lab. He is interested in how ultrasound feedback can be used to aid in the acquisition of non-native language sounds, both in speech and in singing. In addition to this work, his research interests include the process of phonological planning in speech production, the links between speech perception and production, and the nature of phonological representation. He also has a longstanding interest in Russian phonetics and phonology.
Alumni  
Headshot for the Speech-Language-Hearing Sciences program
Eric Jackson received his Ph.D. in 2015 for a dissertation entitled "Variability, stability, and flexibility in the speech kinematics and acoustics of adults who do and do not stutter." This work examined kinematic variability in stuttering and non-stuttering speakers, and it applied a dynamical perspective for a fuller explication of the patterns. Eric is also a speech-language pathologist with a focus on working with children and adults who stutter. He is currently an Assistant Professor in The Department of Communicative Sciences and Disorders at New York University.


Speech Production
Whalen, D. H., & Chen, W.-R. (2019). Variability and central tendencies in speech production. Frontiers in Communication, 4(49), 1-9. doi: 10.3389/fcomm.2019.00049.

Whalen, D. H. (2019). The Motor Theory of Speech Perception. In M. Aronoff (Ed.), Oxford Research Encyclopedia of Linguistics. Oxford University Press. doi: 10.1093/acrefore/9780199384655.013.404

Whalen, D. H. (2019). Phonetics. In M. Aronoff (Ed.), Oxford Research Encyclopedia of Linguistics. Oxford University Press. doi: 10.1093/acrefore/9780199384655.013.57

Preston, J. L., McCabe, P., Tiede, M. K., & Whalen, D. H. (2019). Tongue shapes for rhotics in school-age children with and without residual speech errors. Clinical Linguistics and Phonetics, 33, 334–348.

Preston, J. L., McAllister, T., Phillips, E., Boyce, S. E., Tiede, M. K., Kim, J. S., & Whalen, D. H. (2019). Remediating residual rhotic errors with traditional and ultrasound-enhanced treatment: A single-case experimental study. American Journal of Speech-Language Pathology, 28, 1167-1183.  

Chen, W.-R., Whalen, D. H., & Shadle, C. H. (2019). F0-induced formant measurement errors result in biased variabilities. Journal of the Acoustical Society of America, 145, EL360-EL366.

Cho, T., Whalen, D. H., & Docherty, G. (2019). Voice onset time and beyond: Exploring laryngeal contrast in 19 languages. Journal of Phonetics, 72, 52-65.

Whalen, D. H. (2019). Normalization of the Natural Referent Vowels. In A. M. Nyvad, M. Hejná, A. Højen, A. B. Jespersen, & M. H. Sørensen (Eds.), A sound approach to language matters (pp. 81-88). Aarhus: University of Aarhus.

Whalen, D. H., & McDonough, J. M. (2019). Under-researched languages: Phonetic results from language archives. In W. F. Katz & P. F. Assmann (Eds.), Routledge handbook of phonetics (pp. 51-71). Abingdon, Oxon; New York: Routledge.

Whalen, D. H. (2019). Arthur S. Abramson. Journal of Phonetics, 72, 83-84. doi: 10.1016/j.wocn.2019.01.001. 

Whalen, D. H., & Koenig, L. L. (2018). Arthur S. Abramson. Language, 94, 969-975.

Preston, J. L., McAllister, T., Phillips, E., Boyce, S. E., Tiede, M. K., Kim, J. S., & Whalen, D. H. (2018). Treatment for residual rhotic errors with high and low frequency ultrasound visual feedback: A single case experimental design. Journal of Speech, Language and Hearing Research, 61, 1875-1892. doi:10.1044/2018_JSLHR-S-17-0441
 
Whalen, D. H., Chen, W.-R., Tiede, M. K., & Nam, H. (2018). Variability of articulator positions and formants across nine English vowels. Journal of Phonetics, 68, 1-14.
 
Abramson, A. S., & Whalen, D. H. (2017). Voice Onset Time (VOT) at 50: Theoretical and practical issues in measuring voicing distinctions. Journal of Phonetics, 63, 75-86.
 
Whalen, D. H., Zunshine, L., Ender, E., Tougaw, J., Barsky, R. F., Steiner, P., ... Holquist, M. (2017). Validating judgments of perspective embedding: Further explorations of a new tool for literary analysis. Scientific Study of Literature.
 
Dawson, K. M., Tiede, M. K., & Whalen, D. H. (2016). Methods for quantifying tongue shape and complexity using ultrasound imaging. Clinical Linguistics and Phonetics, 30, 328-344.
 
Jackson, E. S., Tiede, M. K., Beal, D. S., & Whalen, D. H. (2016). The impact of social-cognitive stress on speech variability, determinism, and stability in adults who do and do not stutter. Journal of Speech, Language, and Hearing Research.
 
Jackson, E. S., Tiede, M. K., Riley, M. A., & Whalen, D. H. (2016). Recurrence quantification analysis of sentence-level speech kinematics. Journal of Speech, Language and Hearing Research.
 
Roon, K. D., Dawson, K. M., Tiede, M. K., & Whalen, D. H. (2016). Indexing head movement during speech production using optical markers. Journal of the Acoustical Society of America, 139, EL167-EL171.
 
Roon, K. D., & Gafos, A. I. (2016). Perceiving while producing: Modeling the dynamics of phonological planning. Journal of Memory and Language, 89, 222-243.
 
Shadle, C. H., Nam, H., & Whalen, D. H. (2016). Comparing measurement errors for formants in synthetic and natural vowels. Journal of the Acoustical Society of America, 139, 713-727.
 
Whalen, D. H. (2016). Direct perceptions of Carol Fowler's theoretical perspective. Ecological Psychology, 28, 183-187.
 
Whalen, D. H., Moss, M. P., & Baldwin, D. (2016). Healing through language: Positive physical health effects of indigenous language use [version 1; referees: awaiting peer review]. F1000Research, 5(852). doi:10.12688/f1000research.8656.1
 
Dawson, K. M., Bomide, G., Tiede, M., & Whalen, D. H. (2015). A dual task study of the effects of increased cognitive load on speech motor control. Journal of the Acoustical Society of America, 138, 1779.
 
DiCanio, C. T., Nam, H., Amith, J. D., Whalen, D. H., & Castillo García, R. (2015). Vowel variability in elicited versus running speech: Evidence from Mixtec. Journal of Phonetics, 48, 45-59.
 
Jackson, E. S., Yaruss, J. S., Quesal, R. W., Terranova, V., & Whalen, D. H. (2015). Responses of adults who stutter to the anticipation of stuttering. Journal of Fluency Disorders, 45, 38-51.
 
Liu, F., Xu, Y., Prom-on, S., & Whalen, D. H. (2015). Computational modelling of double focus in American English. In The Scottish Consortium for ICPhS 2015 (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences (Paper number 126, pp. 1-5). Glasgow: University of Glasgow.
 
Roon, K. D., & Gafos, A. (2015). Perceptuo-motor effects of response-distractor compatibility in speech: Beyond phonemic identity. Psychonomic Bulletin and Review, 22, 242-250.
 
Roon, K. D., Tiede, M. K., Dawson, K. M., & Whalen, D. H. (2015). Coordination of eyebrow movement with speech acoustics and head movement. In The Scottish Consortium for ICPhS 2015 (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences (Paper number 472, pp. 1-5). Glasgow: University of Glasgow.
 
Whalen, D. H., Katsika, A., Tiede, M. K., & King, H. (2015). Acoustic measures of planned and unplanned coarticulation. In The Scottish Consortium for ICPhS 2015 (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences (Paper number 767, pp. 1-5). Glasgow: University of Glasgow.
 
Whalen, D. H., & McDonough, J. M. (2015). Taking the laboratory into the field. Annual Review of Linguistics, 1, 395–415.
 
Whalen, D. H., Zunshine, L., & Holquist, M. (2015). Perspective embedding affects reading time: Implications for the reading of literature. Frontiers in Psychology, 6(1778), 1-9.
 
Yang, B., & Whalen, D. H. (2015). Perception and production of American English vowels by American males and females. Australian Journal of Linguistics, 35, 121-141.
 
Shadle, C. H., Nam, H., & Whalen, D. H. (2014). Accuracy of six techniques for measuring formants in isolated words. Journal of the Acoustical Society of America, 135, 2426.
 
Whalen, D. H., Dawson, K. M., Carl, M., & Iskarous, K. (2014). Tongue shape complexity for liquids in Parkinsonian speech. Journal of the Acoustical Society of America, 135, 2389.
 
Noiray, A., Iskarous, K., & Whalen, D. H. (2014). Variability in English vowels is comparable in articulation and acoustics. Laboratory Phonology, 5(2), 271-288.
 
Nam, H., Mooshammer, C., Iskarous, K., & Whalen, D. H. (2013). Hearing tongue loops: Perceptual sensitivity to acoustic signatures of articulatory dynamics. Journal of the Acoustical Society of America, 134(5), 3808-3817.
 
DiCanio, C., Nam, H., Whalen, D. H., Bunnell, H. T., Amith, J. D., & García, R. C. (2013). Using automatic alignment to analyze endangered language data: Testing the viability of untrained alignment. Journal of the Acoustical Society of America, 134(3), 2235-2246.
 
Nam, H., Goldstein, L. M., Giulivi, S., Levitt, A. G., & Whalen, D. H. (2013). Computational simulation of CV combination preferences in babbling. Journal of Phonetics, 41(2), 63-77.
 
Whalen, D. H., Giulivi, S., Nam, H., Levitt, A. G., Hallé, P., & Goldstein, L. M. (2012). Biomechanically preferred consonant-vowel combinations fail to appear in adult spoken corpora. Language and Speech, 0023830911434123.
 
Whalen, D. H., Zunshine, L., & Holquist, M. (2012). Theory of Mind and embedding of perspective: A psychological test of a literary "sweet spot". Scientific Study of Literature, 2(2), 301.
 
Whalen, D. H., & Simons, G. F. (2012). Endangered language families. Language, 88(1), 155-173.
 
Yu, Y. H., Shafer, V. L., Sussman, E., & Whalen, D. H. (2012). Comparing language experience and task demands in Mandarin tone processing: Neurophysiological evidence. Journal of the Acoustical Society of America, 131(4), 3233.
 
Irwin, J. R., Tornatore, L. A., Brancazio, L., & Whalen, D. H. (2011). Can children with autism spectrum disorders "hear" a speaking face? Child Development, 82(5), 1397-1403.
 
Giulivi, S., Whalen, D. H., Goldstein, L. M., Nam, H., & Levitt, A. G. (2011). An Articulatory Phonology account of preferred consonant-vowel combinations. Language Learning and Development. http://www.haskins.yale.edu/Reprints/HL1683.pdf
 
Honorof, D. N., & Whalen, D. H. (2010). Identification of speaker sex from one vowel across a range of fundamental frequencies. Journal of the Acoustical Society of America.
 
Iskarous, K., Fowler, C. A., & Whalen, D. H. (2010). Locus equations are an acoustic expression of articulator synergy. Journal of the Acoustical Society of America.
 
Iskarous, K., Nam, H., & Whalen, D. H. (2010). Perception of articulatory dynamics from acoustic signatures. Journal of the Acoustical Society of America, 127, 3717-3728.
 
McDonough, J., & Whalen, D. H. (2008). The phonetics of native North American languages. Journal of Phonetics, 36, 423-426.
 
Mongillo, E. A., Irwin, J. R., Whalen, D. H., Klaiman, C., Carter, A. S., & Schultz, R. T. (2008). Audiovisual processing in children with and without autism spectrum disorders. Journal of Autism and Developmental Disorders, 38, 1349-1358.
 
Golumbic, E., Deouell, L. Y., Whalen, D. H., & Bentin, S. (2007). Representation of harmonic frequencies in auditory memory: A mismatch negativity study. Psychophysiology, 44, 671-679.
 
Whalen, D. H., Levitt, A. G., & Goldstein, L. M. (2007). VOT in the babbling of French- and English-learning infants. Journal of Phonetics, 35, 341-352.
 
Whalen, D. H., Iskarous, K., Tiede, M. K., Ostry, D. J., Lehnert-LeHouillier, H., Vatikiotis-Bateson, E., & Hailey, D. S. (2005). HOCUS, the Haskins Optically-Corrected Ultrasound System. Journal of Speech, Language, and Hearing Research, 48, 543-553.
 
Benson, R. R., Richardson, M., Whalen, D. H., & Lai, S. (2006). Phonetic processing areas revealed by sinewave speech and acoustically similar non-speech. NeuroImage, 31, 342-353.
 
Irwin, J. R., Whalen, D. H., & Fowler, C. A. (2006). A sex difference in visual influence on heard speech. Perception and Psychophysics, 68, 582-592.
 
Whalen, D. H., Benson, R. R., Richardson, M., Swainson, B., Clark, V., Lai, S., et al. (2006). Differentiation for speech and nonspeech processing within primary auditory cortex. Journal of the Acoustical Society of America, 119, 575-581.
 
Whalen, D. H., & Lindblom, B. E. (2006). Speech, biological basis. In K. Brown (Ed.), Encyclopedia of language and linguistics (2nd ed., Vol. 12, pp. 61-68). Oxford: Elsevier.
 
Honorof, D. N., & Whalen, D. H. (2005). Perception of pitch location within a speaker's F0 range. Journal of the Acoustical Society of America, 117, 2193-2200.
 
Whalen, D. H., Magen, H. S., Pouplier, M., Kang, A. M., & Iskarous, K. (2004). Vowel production and perception: Hyperarticulation without a hyperspace effect. Language and Speech, 47, 155-174.
 
Whalen, D. H., Magen, H. S., Pouplier, M., Kang, A. M., & Iskarous, K. (2004). Vowel targets without a hyperspace effect. Language, 80, 377-378.
 
Whalen, D. H. (2004). How the study of endangered languages will revolutionize linguistics. In P. van Sterkenburg (Ed.), Linguistics today: Facing a greater challenge (pp. 321-342). Amsterdam: John Benjamins.
 
Magen, H. S., Kang, A. M., Tiede, M., & Whalen, D. H. (2003). Posterior pharyngeal wall position in the production of speech. Journal of Speech, Language, and Hearing Research, 46, 241-251.
 
Peterson, B. S., Vohr, B., Kane, M. J., Whalen, D. H., Schneider, K. C., Katz, K. H., et al. (2002). A functional magnetic resonance imaging study of language processing and its cognitive correlates in prematurely born children. Pediatrics, 110, 1153-1162.
 
Benson, R. R., Whalen, D. H., Richardson, M., Swainson, B., Clark, V., Lai, S., & Liberman, A. M. (2001). Parametrically dissociating speech and non-speech perception in the brain using fMRI. Brain and Language, 78, 364-396.
 
Liberman, A. M., & Whalen, D. H. (2000). On the relation of speech to language. Trends in Cognitive Sciences, 4, 187-196.
 
Whalen, D. H., Gick, B., & LeSourd, P. S. (1999). Intrinsic F0 in Passamaquoddy vowels. In D. H. Pentland (Ed.), Papers from the 30th Algonquian conference (pp. 417-428). Winnipeg: University of Manitoba.
 
Whalen, D. H., Kang, A. M., Magen, H. S., Fulbright, R. K., & Gore, J. C. (1999). Predicting pharynx shape from tongue position during vowel production. Journal of Speech, Language and Hearing Research, 42, 592-603.
 
Whalen, D. H., Gick, B., Kumada, M., & Honda, K. (1998). Cricothyroid activity in high and low vowels: Exploring the automaticity of intrinsic F0. Journal of Phonetics, 27, 125-142.
 
Whalen, D. H., Gick, B., Kumada, M., & Honda, K. (1998). EMG evidence for the automaticity of intrinsic F0 of vowels. In P. K. Kuhl & L. A. Crum (Eds.), Proceedings of the 16th International Congress on Acoustics and 135th Meeting of the Acoustical Society of America (Vol. 4, pp. 2951-2952).
 
Whalen, D. H. (1997). What duplex perception tells us about speech perception. In K. Singer, R. Eggert, & G. Anderson (Eds.), Papers from the panels, CLS 33 (pp. 435-446). Chicago: Chicago Linguistic Society.
 
Whalen, D. H., Best, C. T., & Irwin, J. (1997). Lexical effects in the perception and production of American English /p/ allophones. Journal of Phonetics, 25, 501-528.
 
Whalen, D. H., & Kinsella-Shaw, J. M. (1997). Exploring the relationship of breath intake to utterance duration. Phonetica, 54, 138-152.
 
Whalen, D. H., & Sheffert, S. (1997). Normalization of vowels by breath sounds. In K. Johnson & J. W. Mullenix (Eds.), Talker variability in speech processing (pp. 133-144). New York: Academic Press.
 
Xu, Y., Liberman, A. M., & Whalen, D. H. (1997). On the immediacy of phonetic perception. Psychological Science, 8, 358-362.
 
Whalen, D. H., & Liberman, A. M. (1996). Limits on phonetic integration in duplex perception. Perception and Psychophysics, 58, 857-870.
 
Whalen, D. H., & Sheffert, S. (1996). Perceptual use of vowel and speaker information in breath sounds. In H. T. Bunnell & W. Idsardi (Eds.), Proceedings ICSLP 97 (pp. 2494-2497).
 
Whalen, D. H. (1995). Directions in speech perception research. In G. Bloothooft, V. Hazan, D. Huber, & J. Llisteri (Eds.), European studies in phonetics and speech communication (pp. 76-80). The Hague: CIP-Gegevens Koninklijke Bibliotheek.
 
Whalen, D. H., & Levitt, A. G. (1995). The universality of intrinsic F0 of vowels. Journal of Phonetics, 23, 349-366.
 
Whalen, D. H., Levitt, A. G., Hsiao, P.-L., & Smorodinsky, I. (1995). Intrinsic F0 of vowels in the babbling of 6-, 9- and 12-month-old French- and English-learning infants. Journal of the Acoustical Society of America, 97, 2533-2539.
 
Selected Abstracts / Recent Presentations
 
Whalen, D. H. (2016). Evidence for vowel targets in formant distributions and within-syllable adjustments. Paper presented at the 15th Conference on Laboratory Phonology, Ithaca, NY.
 
Whalen, D. H., Tiede, M. K., & Chen, W.-R. (2016). Prediction of articulator positions from subsets of natural variability. Paper presented at the 15th Conference on Laboratory Phonology, Ithaca, NY.
 
DiCanio, C. T., & Whalen, D. H. (2015). The interaction of vowel length and speech style in an Arapaho speech corpus. In The Scottish Consortium for ICPhS 2015 (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences (Paper number 513, pp. 1-5). Glasgow: University of Glasgow.
 
Goldenberg, D., Tiede, M. K., & Whalen, D. H. (2015). Aero-tactile influence on speech perception of voicing continua. In The Scottish Consortium for ICPhS 2015 (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences (Paper number 814, pp. 1-5). Glasgow: University of Glasgow.
 
Whalen, D. H. (2014). Deliberate variability in speech for flexibly engaging other speakers. Paper presented at Finding Common Ground: Social, Ecological, and Cognitive Perspectives on Language Use, Storrs, CT.
 
Mizoguchi, A., & Whalen, D. H. (2014). Ultrasound evidence for place of articulation of the mora nasal /ɴ/ in Japanese. Journal of the Acoustical Society of America, 136(4), 2103.
 
Buccheri, R. A., Whalen, D. H., Strange, W., McGarr, N. S., & Raphael, L. J. (2014). Effects of speaking mode (clear, habitual, slow speech) on vowels of individuals with Parkinson's disease. Journal of the Acoustical Society of America, 135(4), 2294.
 
Roon, K.D., Klein, E., & Gafos, A.I. (2014). Distractor effects on response times in fricative production. Proceedings of ISSP 10, Köln.
 
Jackson, E., Tiede, M., & Whalen, D. H. (2013). A comparison of kinematic and acoustic approaches to measuring speech stability between speakers who do and do not stutter. Journal of the Acoustical Society of America, 134(5), 4206.
 
Dawson, K. M., Iskarous, K., & Whalen, D. H. (2013). An analysis of tongue shape during Parkinsonian speech. Journal of the Acoustical Society of America, 134(5), 4204.
 
Whalen, D. H., Shaw, P., Noiray, A., & Antony, R. (2012, January). Segments Calling Each to Each: Consonant harmony in Tahltan. Presented at the CUNY Conference on the Segment, New York City, NY.

Tools

•    Acoustic recording
•    Electroglottography
•    Visual tracking of articulators (Optotrak)
•    Electromagnetic Articulometry (EMA; Wave)
•    Ultrasound imaging of the tongue
•    Airflow recording