


11th ICMI 2009: Cambridge, Massachusetts, USA
- James L. Crowley, Yuri A. Ivanov, Christopher Richard Wren, Daniel Gatica-Perez, Michael Johnston, Rainer Stiefelhagen: Proceedings of the 11th International Conference on Multimodal Interfaces, ICMI 2009, Cambridge, Massachusetts, USA, November 2-4, 2009. ACM 2009, ISBN 978-1-60558-772-1
Keynote address I
- Cynthia Breazeal: Living better with robots. 1-2
Multimodal communication analysis (Oral)
- Dinesh Babu Jayagopi, Daniel Gatica-Perez: Discovering group nonverbal conversational patterns with topics. 3-6
- Sebastian Germesin, Theresa Wilson: Agreement detection in multiparty conversation. 7-14
- Lei Chen, Mary P. Harper: Multimodal floor control shift detection. 15-22
- Stavros Petridis, Hatice Gunes, Sebastian Kaltwang, Maja Pantic: Static vs. dynamic modeling of human nonverbal behavior from multiple cues and modalities. 23-30
Multimodal dialog
- Dan Bohus, Eric Horvitz: Dialog in the open world: platform and applications. 31-38
- Theofanis Kannetis, Alexandros Potamianos: Towards adapting fantasy, curiosity and challenge in multimodal dialogue systems for preschoolers. 39-46
- Michael Johnston: Building multimodal applications with EMMA. 47-54
Multimodal communication analysis and dialog (Poster)
- Kentaro Ishizuka, Shoko Araki, Kazuhiro Otsuka, Tomohiro Nakatani, Masakiyo Fujimoto: A speaker diarization method based on the probabilistic fusion of audio-visual location information. 55-62
- Paul W. Schermerhorn, Matthias Scheutz: Dynamic robot autonomy: investigating the effects of robot decision-making in a human-robot team task. 63-70
- Giuseppe Di Fabbrizio, Thomas Okken, Jay G. Wilpon: A speech mashup framework for multimodal mobile services. 71-78
- Sunsern Cheamanunkul, Evan Ettinger, Matthew Jacobsen, Patrick Lai, Yoav Freund: Detecting, tracking and interacting with people in a public space. 79-86
- Neil Cooke, Martin J. Russell: Cache-based language model adaptation using visual attention for ASR in meeting scenarios. 87-90
- Iwan de Kok, Dirk Heylen: Multimodal end-of-turn prediction in multi-party meetings. 91-98
- Shiro Kumano, Kazuhiro Otsuka, Dan Mikami, Junji Yamato: Recognizing communicative facial expressions for discovering interpersonal emotions in group meetings. 99-106
- Saturnino Luz, Bridget Kane: Classification of patient case discussions through analysis of vocalisation graphs. 107-114
- Georgios N. Yannakakis: Learning from preferences and selected multimodal features of players. 115-118
- Ginevra Castellano, André Pereira, Iolanda Leite, Ana Paiva, Peter W. McOwan: Detecting user engagement with a robot companion using task and social interaction-based features. 119-126
- Ian R. Fasel, Masahiro Shiomi, Philippe-Emmanuel Chadutaud, Takayuki Kanda, Norihiro Hagita, Hiroshi Ishiguro: Multi-modal features for real-time detection of human-robot interaction categories. 127-134
- Justine Cassell, Kathleen Geraghty, Berto Gonzalez, John Borland: Modeling culturally authentic style shifting with virtual peers. 135-142
- Rui Fang, Joyce Y. Chai, Fernanda Ferreira: Between linguistic attention and gaze fixations in multimodal conversational interfaces. 143-150
Keynote address II
- Stephen A. Brewster: Head-up interaction: can we break our addiction to the screen and keyboard? 151-152
Multimodal fusion (special session)
- Denis Lalanne, Laurence Nigay, Philippe A. Palanque, Peter Robinson, Jean Vanderdonckt, Jean-François Ladry: Fusion engines for multimodal input: a survey. 153-160
- Hildeberto Mendonça, Jean-Yves Lionel Lawson, Olga Vybornova, Benoît Macq, Jean Vanderdonckt: A fusion framework for multimodal interactive applications. 161-168
- Bruno Dumas, Rolf Ingold, Denis Lalanne: Benchmarking fusion engines of multimodal interactive systems. 169-176
- Marcos Serrano, Laurence Nigay: Temporal aspects of CARE-based multimodal fusion: from a fusion mechanism to composition components and WoZ components. 177-184
- Jean-François Ladry, David Navarre, Philippe A. Palanque: Formal description techniques to support the design, construction and evaluation of fusion engines for sure (safe, usable, reliable and evolvable) multimodal interfaces. 185-192
- Tevfik Metin Sezgin, Ian Davies, Peter Robinson: Multimodal inference for driver-vehicle interaction. 193-198
Gaze, gesture, and reference (Oral)
- Thomas Bader, Matthias Vogelgesang, Edmund Klaus: Multimodal integration of natural gaze behavior for intention recognition during object manipulation. 199-206
- Paul Piwek: Salience in the generation of multimodal referring acts. 207-210
- Tyler Baldwin, Joyce Y. Chai, Katrin Kirchhoff: Communicative gestures in coreference identification in multiparty meetings. 211-218
Demonstration session
- Kazuhiro Otsuka, Shoko Araki, Dan Mikami, Kentaro Ishizuka, Masakiyo Fujimoto, Junji Yamato: Realtime meeting analysis and 3D meeting viewer based on omnidirectional multimodal sensors. 219-220
- Nalini Vishnoi, Cody Narber, Zoran Duric, Naomi Lynn Gerber: Guiding hand: a teaching tool for handwriting. 221-222
- Andrei Popescu-Belis, Peter Poller, Jonathan Kilgour: A multimedia retrieval system using speech input. 223-224
- Jan B. F. van Erp, Peter J. Werkhoven, Marieke E. Thurlings, Anne-Marie Brouwer: Navigation with a passive brain based interface. 225-226
- Vicente Alabau, Daniel Ortiz, Verónica Romero, Jorge Ocampo: A multimodal predictive-interactive application for computer assisted transcription and translation. 227-228
- Victor S. Finomore, Dianne K. Popik, Douglas Brungart, Brian D. Simpson: Multi-modal communication system. 229-230
- Bruno Dumas, Denis Lalanne, Rolf Ingold: HephaisTK: a toolkit for rapid prototyping of multimodal interfaces. 231-232
- David Llorens, Andrés Marzal, Federico Prat, Juan Miguel Vilar: State: an assisted document transcription system. 233-234
- Jérôme Monceaux, Joffrey Becker, Céline Boudier, Alexandre Mazel: Demonstration: first steps in emotional expression of the humanoid robot Nao. 235-236
- Elena Mugellini, Maria Sokhn, Stefano Carrino, Omar Abou Khaled: WiiNote: multimodal application facilitating multi-user photo annotation activity. 237-238
Keynote address III
- Frédéric Kaplan: Are gesture-based interfaces the future of human computer interaction? 239-240
Doctoral spotlight oral session
- Zheng Li, Xia Mao, Lei Liu: Providing expressive eye movement to virtual agents. 241-244
- Angelika Dierker, Christian Mertes, Thomas Hermann, Marc Hanheide, Gerhard Sagerer: Mediated attention with multimodal augmented reality. 245-252
- Stefanie Tellex, Deb Roy: Grounding spatial prepositions for video search. 253-260
- Boris Schauerte, Jan Richarz, Thomas Plötz, Christian Thurau, Gernot A. Fink: Multi-modal and multi-camera attention in smart environments. 261-268
Multimodal devices and sensors (Oral)
- Rami Ajaj, Christian Jacquemin, Frédéric Vernier: RVDT: a design space for multiple input devices, multiple views and multiple display surfaces combination. 269-276
- Katayoun Farrahi, Daniel Gatica-Perez: Learning and predicting multimodal daily life patterns from cell phones. 277-280
- Hendrik Iben, Hannes Baumann, Carmen Ruthenbeck, Tobias Klug: Visual based picking supported by context awareness: comparing picking performance using paper-based lists versus lists presented on a head mounted display with contextual support. 281-288
Multimodal applications and techniques (poster)
- Nicolás Serrano, Daniel Pérez, Alberto Sanchís, Alfons Juan: Adaptation from partially supervised handwritten text transcriptions. 289-292
- David Demirdjian, Chenna Varri: Recognizing events with temporal random forests. 293-296
- Janani C. Sriram, Minho Shin, Tanzeem Choudhury, David Kotz: Activity-aware ECG-based patient authentication for remote health monitoring. 297-304
- László Kozma, Arto Klami, Samuel Kaski: GaZIR: gaze-based zooming interface for image retrieval. 305-312
- Prasenjit Dey, Ramchandrula Sitaram, Rahul Ajmera, Kalika Bali: Voice key board: multimodal indic text input. 313-318
- Jukka Raisamo, Roope Raisamo, Veikko Surakka: Evaluating the effect of temporal parameters for vibrotactile saltatory patterns. 319-326
- Eve E. Hoggan, Roope Raisamo, Stephen A. Brewster: Mapping information to audio and tactile icons. 327-334
- Teemu Tuomas Ahmaniemi, Vuokko Lantz: Augmented reality target finding based on tactile cues. 335-342
Doctoral spotlight posters
- Sree Hari Krishnan Parthasarathi, Mathew Magimai-Doss, Daniel Gatica-Perez, Hervé Bourlard: Speaker change detection with privacy-preserving audio cues. 343-346
- Yannick Verdie, Bing Fang, Francis K. H. Quek: MirrorTrack: tracking with reflection - comparison with top-down approach. 347-350
- Daniel Kelly, Jane Reilly Delannoy, John Mc Donald, Charles Markham: A framework for continuous multimodal sign language recognition. 351-358
