Linguistics Colloquium: Carol Neidle (BU)

OCT 10, 2013 | 4:15 PM TO 6:15 PM

Details

WHERE: The Graduate Center, 365 Fifth Avenue
ROOM: 6417
WHEN: October 10, 2013, 4:15 PM-6:15 PM
ADMISSION: Free

Description

TITLE: Crossdisciplinary Approaches to American Sign Language Research: Linguistically Annotated Video Corpora for Linguistic Analysis and Computer-based Sign Language Recognition/Generation
 

ABSTRACT: After a brief introduction to the linguistic organization of American Sign Language (ASL), this talk will present an overview of collaborations between linguists and computer scientists aimed at advancing sign language linguistics and computer-based sign language recognition (and generation). Underpinning this research are expanding, linguistically annotated video corpora containing multiple synchronized views of productions by native users of ASL. These materials are being shared publicly through a Web interface, currently under development, that facilitates browsing, searching, viewing, and downloading subsets of the data.
 
Two sub-projects will be highlighted: 
(1) Linguistic modeling used to enhance computer vision-based recognition of manual signs. Statistics derived from an annotated corpus of about 10,000 citation-form sign productions by six native signers make it possible to leverage linguistic constraints and thereby improve the robustness of sign recognition.
(2) Recognition of grammatical information expressed through complex combinations of facial expressions and head gestures (marking such things as topic/focus, distinct types of questions, negation, if/when clauses, and relative clauses), based on state-of-the-art face and head tracking combined with machine learning techniques. This modeling is also being applied to the creation of more natural and linguistically realistic signing avatars. Furthermore, the ability to provide computer-generated graphs illustrating, for large data sets, changes over time in features such as eyebrow height, eye aperture, and head position, in relation to the start and end points of the manual signs in the phrases with which the nonmanual gestures co-occur, opens up new possibilities for linguistic analysis of the nonmanual components of sign language grammar and for crossmodal comparisons.
 
The research reported here has resulted from collaborations with many people, including Stan Sclaroff and Ashwin Thangali (BU); Dimitris Metaxas, Mark Dilsizian, Bo Liu, and Jingjing Liu (Rutgers); Ben Bahan and Christian Vogler (Gallaudet); and Matt Huenerfauth (CUNY Queens College). It has been made possible by funding from the National Science Foundation.