“Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction”                               

 

Submission Instructions and Publication


For authors:

Copyright form: copyright_lnai_ma3hmi.pdf



To submit your paper go here:


http://senldogo0039.springer-sbm.com/ocs/home/MA3HMI2014



Accepted papers will be published as post-proceedings in a volume of Springer LNAI.


Prospective authors are invited to submit full papers (10 pages) as well as short papers and posters (6 pages).

Papers must be prepared according to the LNCS/LNAI style template.

The review process is double-blind; all submissions must be anonymous.


All submissions will be refereed by experts in the field based on originality, significance, quality and clarity. Every submitted paper will be reviewed by at least two members of the Program Committee.



Call for Papers


The MA3HMI workshop aims to bring together researchers working on the analysis of multimodal recordings as a means of developing systems that can interact with humans. Artificial agents are considered in their broadest sense, including virtual chat agents, empathic speech interfaces, and life-style coaches on a smartphone. Of special interest are papers that combine real-time natural language processing with the analysis of other modalities.


Therefore, for this edition, we encourage researchers in speech technologies and natural language processing to present and discuss their ideas on multimodal analyses for real-time applications.

We solicit papers that concern the different phases of the development of such human-machine interfaces, including the recording and online analysis of multimodal conversations, dialogue modeling, and the user evaluation of such systems. Tools and systems that address real-time conversations with artificial agents are also within the scope of the workshop.


Workshop topics include, but are not limited to:


(a)   Multimodal annotation

-       Representation formats for merged annotations of different modalities

-       Best practices for multimodal annotation procedures

-       Innovative multimodal annotation schemes, or re-adaptation of existing ones

-       Annotation and processing of multimodal data sets including proper feature extraction

-       Towards multimodal linked open data: new solutions for multimodal linked open data; challenges in structuring multimodal data as linked open data; enhancing existing datasets within the linked open data framework; etc.


(b)  Multimodal analyses

-       Multimodal understanding of the user’s input

-       Multimodal recognition of user behavior and affective state

-       Analysis of human-machine conversations

-       Strategies for human-machine interaction

-       Analyses of speech content and acoustics

-       Analyses of linguistic content and its relations with acoustics

-       Mimicry and gesture analyses

-       Combination of linguistics, speech, and image processing


(c)   Applications, tools and systems

-       Novel applications

-       User studies

-       Tools for the recording, annotation, and analysis of conversations

-       Human-machine interaction systems and tools for their development


Satellite workshop of INTERSPEECH 2014

MA³HMI 2014

Sponsored by: http://fastnet.netsoc.ie/FastNet/Projects/Entries/2012/1/31_Travels_through_the_east.html
Endorsed by: http://fastnet.netsoc.ie/FastNet/Projects/Entries/2012/1/31_Travels_through_the_east.html