 
 

Advanced NLP: Multimodal Human-Machine Interaction – Design and Applications

Instructor

Nava Shaheed (Ling.)  

Prerequisites

  • Methods in CL 1

  • Programming skills are highly recommended

Description

A multimodal user interface for devices requires the integration of several recognition technologies together with a sophisticated user interface and distinct tools for data input and output. Multimodal interaction provides the mobile user with new, complex modalities for interfacing with a system, such as speech, gestures and movements, touch, typing, and more. The course discusses the new world of multimodal user interfaces and the innovative technologies and design practices that create state-of-the-art user interfaces. We will discuss the commercial challenges in this area and try to offer new approaches to these issues. The objective of the course is to expose students to state-of-the-art multimodal user interface technologies and to have them face design challenges, so that they become familiar with the area of mobility and multimodality from both the technological and the usability aspects. Students will be required to propose and design the architecture and dialog flow for a multimodal application. The design plan will be produced using the tools and best practices acquired in class.  

Methods of Instruction

Lecture with some practical classwork.

Course Requirements

  • Attendance (80% minimum)  

  • Participation in class activities  

  • Timely completion of all written assignments.  

Topic List

  • Introduction to Multimodality – Scope & Definition  

  • The development of smart interaction – a historical timeline  

  • Why go multimodal?

    • Technology availability  

    • User demands  

    • Market forces  

  • Multimodal system architectures

    I/O Technologies, Sensors, Data

  • Multimodal system architectures

    Structure drill down

  • Which technologies are used in mobile interaction

    Traditional, Advanced, Creative

    • Mouse, Display, Keypad, DTMF
    • Touch, Voice (ASR, TTS), Face Recognition, Gyro - Tilting/shaking
    • Digital pen, Voice biometrics (SV, SI), Gesture Recognition
    • Emotion Detection, Eye tracking, etc.
  • Which technologies are used in mobile interaction (cont.)

    Traditional, Advanced, Creative

  • Final project planning – class work

  • Design implications of multimodality in applications

    Part A – Integration & Deployment issues

    • Infrastructure
    • Robustness
    • Online access
    • Standardization
  • Design implications of multimodality in mobile applications

    Part B – MMI: User Interface, User Experience, Tools

  • Scenario planning – class work  

  • Multimodal applications in the real world  

  • Multimodal applications and products in the mobile environment

  • Usability testing & Quality Assurance methodology

  • Project presentations

Assessment

  • Midterm Presentation: 40% 

  • Final paper: 60%  

Students will be required to hand in the final paper in parts during the course.

Bibliography

  1. List of current articles – to be published  

  2. Kurkovsky, Stan. "Multimodality in Mobile Computing and Mobile Devices: Methods for Adaptable Usability." IGI Global, 2010. 1-406. Web. 23 May 2013. doi:10.4018/978-1-60566-978-6 

  3. Neustein, Amy, and Judith A. Markowitz, eds. "Mobile Speech and Advanced Natural Language Solutions." Springer, 2013. ISBN 978-1-4614-6018-3