ICMC2018: International Conference on Multimodal Communication
Find us on WeChat for the most recent conference news!
Hunan Normal University, 1-3 November 2018
Conference Directors: ZENG Yanyu and Mark Turner
Assistant Directors: Prof. CHEN Minzhe and Dr. LIU Bai
See the Call for Papers for full instructions.
Click here for the Chinese announcement of the conference, from the College of Foreign Studies, Hunan Normal University.
The Annual Hunan Normal University International Conference on Languages and Cultures has as its 2018 theme MULTIMODAL COMMUNICATION.
ICMC2018 is hosted by Hunan Normal University and its
Center for Cognitive Science. It is organized by the College of Foreign Studies.
It builds on the tradition established by ICMC2017.
We encourage presentations on any aspect of multimodal communication, including topics that deal with word and image, language and multimodal constructions, paralinguistics, facial expressions, gesture, cinema, theater, role-playing games, and more. Research domains may be drawn from literature and the arts, personal interaction, social media, mass media, group communication, and beyond. We invite conceptual papers, observational studies, experiments, and computational, technical, and statistical approaches.
Thursday, 1 November: Methods Training Classes
Thursday, 1 November, evening: participants are invited to a lecture by Peter Knox, Director of the Baker-Nord Center for the Humanities at Case Western Reserve University. This talk is hosted by the College of Foreign Studies.
Friday and Saturday, 2-3 November: Conference
Methods Training Classes, Thursday, 1 November
Four Methods Training Classes: 9-10:30am, 11am-12:30pm, 1-2:30pm, 3-4:30pm
- Francis Steen, University of California, Los Angeles
- Thomas Hoffmann, Professor and Chair of English Language and Linguistics, Katholische Universität Eichstätt-Ingolstadt
- Sandra Blakely, Emory University
In addition to plenary talks, parallel sessions, and a conference dinner, the conference will feature, on 1 November, plenary workshops on methods given by leading methodological experts. Each presents a specific workflow, showing how particular methods can be applied to transform a research question into finished, publishable research products.
Detailed Descriptions of Plenary Talks and Plenary Workshops on Methods
- Francis Steen
Multimodal constructions employing tone of voice, hand gestures, gaze direction, facial expressions, and pose are uniquely efficient at modulating the literal verbal meaning of utterances. In this talk, I will examine strategies of signaling epistemic stance, positive or negative deviance from expectation, and emotional coloration.
Title: Modulating meaning: How to convey the literally unspeakable
Bio: Associate Professor, Department of Communication, University of California, Los Angeles.
- Sandra Blakely
Bio: Emory University.
- Thomas Hoffmann
Over the past 30 years, evidence from cognitive linguistics, psycho- as well as neurolinguistics, and research into language acquisition, variation, and change has provided ample support for Construction Grammar, the theory that holds that arbitrary and conventionalized pairings of form and meaning are the central units of human language. Recently, several scholars have begun to explore the idea of a Multimodal Construction Grammar, i.e., the notion that not only language but multimodal communication in general might be based on multimodal constructions. In this talk, I will take a closer look at the evidence for and against multimodal constructions. In particular, I will focus on the cognitive processes that produce multimodal utterances (i.e., the interaction of working memory and long-term memory) as well as the role of inter-individual psychological differences (especially the Big Five personality traits: openness, conscientiousness, extroversion, agreeableness, and neuroticism). All data for this talk will be drawn from the Distributed Little Red Hen Lab (http://redhenlab.org).
Title: Multimodal Construction Grammar – Cognitive and Psychological Aspects of a Cognitive Semiotic Theory of Verbal and Nonverbal Communication
Bio: Professor and Chair of English Language and Linguistics, Katholische Universität Eichstätt-Ingolstadt.
Plenary Workshops on Methods
- Mark Turner, 9-10:30am, 1 November.
Abstract: The Distributed Little Red Hen Lab (http://redhenlab.org) has been developing new tools for several years, with support from various agencies, including Google, which has provided four Google Summer of Code awards, in 2015, 2016, 2017, and 2018. These tools concern search, tagging, data capture and analysis, language, audio, video, gesture, frames, and multimodal constructions. Red Hen now has several hundred thousand hours of recordings, more than 4 billion words, in a variety of languages and from a variety of countries, including China. She ingests and processes an additional 150 hours or so per day, and is expanding the number of languages held in the archive. The largest component of the Red Hen archive is called "NewsScape," but Red Hen has several other components with a variety of content and in a variety of media. Red Hen is entirely open-source; her new tools are free to the world; they are built to apply to almost any kind of recording, from digitized text to cinema to news broadcasts to experimental data to surveillance video to painting and sculpture to illuminated manuscripts, and more. This interactive workshop will present some topical examples in technical detail, taking a theoretical research question and showing how Red Hen tools can be applied to achieve final research results.
Title: Overview of Red Hen Lab tools for the study of multimodal communication
Bio: Institute Professor and Professor of Cognitive Science, Case Western Reserve University. Co-director, the Distributed Little Red Hen Lab (http://redhenlab.org), Distinguished Visiting Professor and Director of the Center for Cognitive Science (http://18.104.22.168), Hunan Normal University.
- Dr. Weixin Li, 11am-12:30pm, 1 November.
Bio: Beihang University.
Title: Automated Methods and Tools for Facial Expression Recognition in Communication
Facial expression, as a form of non-verbal communication, plays a vital role in conveying social information in human interactions. In the past decades, much progress has been made in the automatic recognition of facial expressions within the computer vision and machine learning research communities. Recent years have also witnessed many successful attempts based on deep learning techniques. These automated methods and related tools for facial expression recognition are especially powerful when dealing with large-scale data corpora. This workshop will first provide an introduction to these methods and tools, and then present a campaign-communication study that applies fully automated coding to a massive collection of news and Twitter data.
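The workflow such systems follow — detect a face in a frame, extract features from it, and classify the expression — can be sketched in miniature. This is an illustrative toy, not one of the workshop's actual tools: the face "detector," the features, and the classifier weights are all hypothetical stand-ins for the trained deep-learning components a real pipeline would use.

```python
import math

EXPRESSIONS = ["neutral", "happy", "sad", "angry"]

def detect_face(frame):
    # Stand-in for a real face detector (e.g. a CNN):
    # simply crop the central region of the frame.
    h, w = len(frame), len(frame[0])
    return [row[w // 4: 3 * w // 4] for row in frame[h // 4: 3 * h // 4]]

def extract_features(face):
    # Stand-in for learned features: mean intensity plus a crude
    # top-versus-bottom contrast over the cropped face.
    pixels = [p for row in face for p in row]
    mean = sum(pixels) / len(pixels)
    top, bottom = face[: len(face) // 2], face[len(face) // 2:]
    top_mean = sum(p for r in top for p in r) / max(1, sum(len(r) for r in top))
    bot_mean = sum(p for r in bottom for p in r) / max(1, sum(len(r) for r in bottom))
    return [mean / 255.0, (top_mean - bot_mean) / 255.0]

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical weights -- in practice a trained model supplies these.
WEIGHTS = [([0.1, 0.0], 0.0),   # neutral
           ([2.0, 1.0], 0.1),   # happy
           ([-1.0, 0.5], 0.0),  # sad
           ([0.5, -2.0], 0.0)]  # angry

def code_frame(frame):
    """Fully automated coding of one grayscale video frame."""
    features = extract_features(detect_face(frame))
    scores = [sum(w * f for w, f in zip(ws, features)) + b for ws, b in WEIGHTS]
    probs = softmax(scores)
    return EXPRESSIONS[max(range(len(probs)), key=probs.__getitem__)]

# A uniform bright 8x8 toy "frame" of grayscale pixel values.
frame = [[200] * 8 for _ in range(8)]
print(code_frame(frame))
```

The point of the sketch is the pipeline shape, not the classifier: at scale, the same `code_frame` call is mapped over millions of frames, which is what makes fully automated coding of large news and social-media collections feasible.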