ICMC2018: International Conference on Multimodal Communication
Find us on WeChat for the latest conference news!
Hunan Normal University, 1-3 November 2018
Conference directors: ZENG Yanyu and Mark Turner
Assistant Directors: Prof. CHEN Minzhe and Dr. LIU Bai
See the Call for Papers for full instructions.
The Chinese announcement of the conference is available from the College of Foreign Studies, Hunan Normal University.
The Annual Hunan Normal University International Conference on Languages and Cultures has as its 2018 theme MULTIMODAL COMMUNICATION.
ICMC2018 is hosted by Hunan Normal University and its Center for Cognitive Science. It is organized by the College of Foreign Studies.
It builds on the tradition established by ICMC2017.
We encourage presentations on any aspect of multimodal communication, including topics that deal with word and image, language and multimodal constructions, paralinguistics, facial expressions, gesture, cinema, theater, role-playing games, and more. Research domains can be drawn from literature and the arts, personal interaction, social media, mass media, group communication, and beyond. We invite conceptual papers, observational studies, experiments, and computational, technical, and statistical approaches.
Thursday, 1 November: 4 Workshops To Train Participants in Methods
Friday and Saturday, 2-3 November: Conference
Conference Plenary Speakers
- Sandra Blakely, Associate Professor of Classics, Emory University.
- Thomas Hoffmann, Professor and Chair of English Language and Linguistics, Katholische Universität Eichstätt-Ingolstadt
- SHI Yuzhi, Professor, Hunan Normal University
- Francis Steen, Associate Professor of Communication, University of California, Los Angeles
- WEN Xu, Professor and Dean, School of Foreign Languages, Southwest University in Chongqing
- Arie Verhagen, Professor of Language, Culture, and Cognition, University of Leiden.
Plenary workshops on 1 November, the day before the conference
In addition to plenary talks, parallel sessions, and a conference dinner, the conference will feature plenary workshops on methods on 1 November, led by leading methodological experts:
- Mark Turner
- Dr. Weixin Li
- Shuwei Xu
- Professor Zhiming Yang
Detailed Descriptions of Plenary Talks and Plenary Workshops on Methods
- Francis Steen. 09:15-10:00, 2 November 2018.
Title: Modulating meaning: How to convey the literally unspeakable
Abstract: Multimodal constructions employing tone of voice, hand gestures, gaze direction, facial expressions, and pose perform a uniquely efficient job of modulating the literal verbal meaning of utterances. In this talk, I will examine strategies for signaling epistemic stance, positive or negative deviance from expectation, and emotional coloration.
Bio: Associate Professor, Department of Communication, University of California, Los Angeles.
- WEN Xu. 10:15-11:00, 2 November 2018.
Title: Multimodal Metaphors of Emotion in Multimodal Discourse
Abstract: Metaphor is ubiquitous in everyday life, not just in language but also in thought, action, and other fields such as pictures, music, and dreams. The view that metaphor is not primarily a matter of language, but structures our thought and action, was first systematically presented in two studies: Andrew Ortony's Metaphor and Thought (1979, 1993) and Lakoff and Johnson's Metaphors We Live By (1980). Since the foundation of Conceptual Metaphor Theory (CMT), conceptual metaphors have attracted much attention from researchers, and CMT itself has undergone revision and elaboration (e.g., Lakoff 1990, 1993; Lakoff and Johnson 1999; Kovecses 2010, 2015; Steen 2008, 2011). Although the idea of CMT has been applied to the research of multimodal metaphors, it has not been fully explored in the study of multimodal discourse. Forceville (2006: 381) suggests that "to further validate the idea that metaphors are expressed by language, as opposed to the idea that they are necessarily linguistic in nature, it is necessary to demonstrate that, and how, they can occur non-verbally and multimodally as well as purely verbally". This article presents a tentative study of multimodal metaphors of emotion in multimodal discourse from the perspective of CMT, maintaining the argument that non-verbal metaphors and verbal metaphors share the same fundamental motivation, that is, the underlying mapping between conceptual or cognitive domains. This study strongly supports the idea that metaphor is not only a matter of language, but also a matter of concept and cognition.
Bio: Professor of Linguistics and CPC Party Secretary of the College of International Studies, Southwest University, Chongqing, China; co-editor of Cognitive Linguistic Studies (John Benjamins) and editor of the Asian-Pacific Journal of Second and Foreign Language Education (Springer). His major research interests include cognitive linguistics, pragmatics, the syntax-semantics interface, and discourse analysis. Postal address: College of International Studies, Southwest University, Beibei, Chongqing, China, 400715.
- Thomas Hoffmann. 11:00-11:45, 2 November 2018.
Title: Multimodal Construction Grammar – Cognitive and Psychological Aspects of a Cognitive Semiotic Theory of Verbal and Nonverbal Communication
Abstract: Over the past 30 years, evidence from cognitive linguistics, psycho- and neurolinguistics, and research into language acquisition, variation, and change has provided ample support for Construction Grammar, the theory that arbitrary and conventionalized pairings of form and meaning are the central units of human language. Recently, several scholars have started to explore the idea of a Multimodal Construction Grammar, i.e. the notion that not only language but multimodal communication in general might be based on multimodal constructions. In this talk, I will take a closer look at the evidence for and against multimodal constructions. In particular, I will focus on the cognitive processes that produce multimodal utterances (i.e. the interaction of working memory and long-term memory) as well as the role of inter-individual psychological differences (especially the Big Five personality traits: openness, conscientiousness, extroversion, agreeableness, and neuroticism). All data for this talk will be drawn from the Distributed Little Red Hen Lab (http://redhenlab.org).
Bio: Professor and Chair of English Language and Linguistics, Katholische Universität Eichstätt-Ingolstadt.
- Sandra Blakely. 09:00-09:45, 3 November 2018.
Title: Ritual, gaming, and the cognitive science of multimodal communication
Abstract: Multimodal communication opens three key heuristic pathways into an ancient ritual, sealed by secrecy and fragmented by time. The ritual was a mystery cult, flourishing on the Aegean island of Samothrace from 600 BC to 400 AD. The realization of its promises – safety in travel at sea – relied on persuasion, communication, and the formation of moral networks that stretched from the Black Sea to Alexandria. The first pathway is at the archaeological site itself. Blended cognition lets us approach the monuments, iconography, texts, and geospatial setting of the rites as a series of multimodal prompts that generate communal engagement in the complex of narratives whose recognition was fundamental to Greek identity. This approach is doubly welcome as a response to the gap between textual and material evidence which has characterized the site and complicated its investigation. The second pathway comes in the sea lanes of the ancient Mediterranean. The promises of the cult would be realized in spaces away from the island: a network analysis of the epigraphic record offers a model of information flow which turned the promise of the rites into a historical reality, bridging the cultural imaginarium and the lived experience of the rites’ initiates. The third pathway lies in a digital simulation. An interactive 3-D game makes it possible to engage contemporary players in a gameworld populated with historically accurate data, and so to measure social and strategic choices. The results foreground problem solving through ludic mindsets, which simulate strategic outcomes based on analogous multimodal engagement with mythic narrative, geospatial environment, and the challenges of mobility in an ancient sea.
Bio: Sandra Blakely is an Associate Professor in the Department of Classics at Emory University; she completed her doctorate in Classics and Anthropology at the University of Southern California. Her research focuses on bridges between social science and classical scholarship, with topical foci on Greek religion and ritual, comparative studies, fragmentary historical sources, and digital approaches to the ancient world. She is currently engaged in an anthropologically grounded analysis of the mystery cult of the Great Gods of Samothrace, integrating social network analysis, geographic information systems, and textual and iconographic analysis relevant to the position of the rites in the maritime world of its Greek and Roman initiates. Her research has benefited from generous fellowships from many institutions, including the Getty Research Institute, the National Endowment for the Humanities, the American School of Classical Studies in Athens, the W.F. Albright Institute of Archaeological Research in Jerusalem, and the American Academy in Rome.
- SHI Yuzhi. 09:45-10:30, 3 November 2018.
Title: General-Domain Cognition for Both Verbal and Nonverbal Communications
Abstract: Differing views on the relationship between language and cognition divide the international linguistics community into two major camps: generative linguistics and cognitive-functional linguistics. The former claims that there is a language faculty in human cognition, independent of other cognitive abilities, responsible for the invention and acquisition of language; syntax is a formal system which is innate and autonomous, distinguishing it from semantics and pragmatics (Chomsky 1957, 1981, 1996, etc.). The latter argues that linguistic competence and other cognitive abilities share the same cognitive mechanisms, called ‘general-domain cognition’ in the literature (Langacker 1987, Croft 2001, Goldberg 2003, Bybee 2013, etc.). In line with this theoretical hypothesis, this article will demonstrate that CATEGORIZATION and SCHEMATIZATION, the two most fundamental cognitive abilities of human beings, have played a key role in language acquisition and linguistic expression, as well as in nonverbal communication by means of gestures, facial expressions, intonation, and other media. Our evidence shows that repetition, frequency, and conventionalization are all involved in both verbal and nonverbal communication. Furthermore, diachronic data show that the linguistic system is subject to the influence of various factors of everyday activities through general-domain cognition.
Key words: generative linguistics, cognitive linguistics, general-domain cognition, categorization, schematization
Bio: Distinguished Professor of Hunan Normal University; PhD, Stanford University (1999); AP at the National University of Singapore. Major interests in grammaticalization and cognitive linguistics. Website: http://blog.sina.com.cn/shi63.
- Arie Verhagen. 10:30-11:15, 3 November 2018.
Title: Multimodal Communication, Iconicity, and Conventionality
Abstract: There is a connection between multimodal communication and the two basic mechanisms of communication: depiction (iconic) and description (symbolic). Communication in the visual mode (gestures, sign language) systematically displays more iconicity than communication in the auditory mode. However, recent empirical and theoretical work suggests that the role of iconicity in the auditory mode may have been underestimated (Dingemanse et al. in TiCS 2015). My first point in this talk adds an important factor to the ones proposed so far, viz. the representation of speech and thought in narratives and ‘dialogic syntax’ phenomena (Du Bois in Cognitive Linguistics 2014), as these involve the (iconic) depiction of verbal action. My second point is that the tendency for auditory communication to (still) become symbolic (i.e. spoken language) to a larger extent and more readily than visual communication may be related to the combination of the public character of speech and the role of conventionality in a community (rather than a dyad) of communicators.
Bio: Arie Verhagen (PhD 1986) is presently Professor of Language, Culture, and Cognition at Leiden University (The Netherlands) and a Research Leader at the University of Antwerp (Belgium). He previously held positions at the Free University in Amsterdam and the University of Utrecht. From 1996 until 2004, he served as editor-in-chief of the journal Cognitive Linguistics. His grammatical work includes studies on word order, passives, causatives, connectives, wh-questions, complementation, and other construction types. With his 2005 monograph Constructions of Intersubjectivity: Discourse, Syntax, and Cognition (Oxford University Press), he contributed to the so-called ‘social turn’ in cognitive linguistics. He has been a (co)supervisor in a number of externally funded interdisciplinary projects, on topics such as comparing cultural evolution in human language and bird song, computational modeling of language acquisition and language change, stylistics and rhetoric in literature and non-fiction, and dialogues in past and present communication. His research is framed in a (radically) usage-based approach (see Geeraerts 2016 for a recent overview) and focuses especially on the connection between grammar, discourse, and the highly developed human ability to understand other minds as a basis for cooperation; a recent result is the volume Viewpoint and the Fabric of Meaning (De Gruyter, 2016, co-edited with Barbara Dancygier and Wei-lun Lu).
Plenary Workshops on Methods
- Mark Turner. 8:30-10:00am, 1 November.
Title: Overview of Red Hen Lab tools for the study of multimodal communication
Abstract: The Distributed Little Red Hen Lab (http://redhenlab.org) has been developing new tools for several years, with support from various agencies, including Google, which has provided four Google Summer of Code awards, in 2015, 2016, 2017, and 2018. These tools concern search, tagging, data capture and analysis, language, audio, video, gesture, frames, and multimodal constructions. Red Hen now has several hundred thousand hours of recordings, more than 4 billion words, in a variety of languages and from a variety of countries, including China. She ingests and processes about an additional 150 hours per day and is expanding the number of languages held in the archive. The largest component of the Red Hen archive is called “Newsscape,” but Red Hen has several other components with a variety of content and in a variety of media. Red Hen is entirely open-source; her new tools are free to the world; they are built to apply to almost any kind of recording, from digitized text to cinema to news broadcasts to experimental data to surveillance video to painting, sculpture, illuminated manuscripts, and more. This interactive workshop will present topical examples in technical detail, taking a theoretical research question and showing how Red Hen tools can be applied to achieve final research results.
Bio: Institute Professor and Professor of Cognitive Science, Case Western Reserve University. Co-director, the Distributed Little Red Hen Lab (http://redhenlab.org), Distinguished Visiting Professor and Director of the Center for Cognitive Science (http://184.108.40.206), Hunan Normal University.
- Dr. Weixin Li, Beihang University (北航). 10:30am-12:00pm, 1 November.
Title: Automated Methods and Tools for Facial Expression Recognition in Communication
Abstract: Facial expression, as a form of non-verbal communication, plays a vital role in conveying social information in human interactions. In the past decades, much progress has been made on automatic recognition of facial expressions in the computer vision and machine learning research communities. Recent years have also witnessed many successful attempts based on deep learning techniques. These automated methods and related tools for facial expression recognition are especially powerful when dealing with large-scale data corpora. This workshop will first provide an introduction to these methods and tools, and then present a campaign-communication study that applies fully automated coding to a massive collection of news and Twitter data.
Bio: Beihang University.
- Professor Zhiming Yang. 2pm-3:30pm. 1 November.
Title: Multimodal Communication and Its Performance Assessment
Abstract: Multimodal Communication (MC) means communicating through more than one “mode”, such as the verbal, visual, gestural, or signed. Communication can be more efficient and effective if we use more than one mode in our schools and workplaces. It is therefore worthwhile to discuss ways of developing multimodal communication skills for students, teachers, and employees. Developing a Likert-scale Multimodal Communication Appraisal (MCA) could help develop people’s MC skills. This workshop will introduce the general procedure for developing an MCA: writing a test specification, defining the construct of MC, designing the test blueprint, conducting item development, pilot studies, and field tests, performing psychometric analysis with classical test theory and item response theory, completing the scaling and norming, and providing score reporting. Some initial results from pilot studies on college and high school students will be presented at the meeting.
Bio: Professor and Director, Center for Assessment Research of Hunan Normal University. Executive Director of Psychometrics, Educational Records Bureau (2013-2016), Psychometrician, Educational Testing Services (2009-2013), Pearson (2008) and Harcourt Assessment (2003-2007).
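The classical-test-theory step of the MCA procedure typically includes an internal-consistency estimate such as Cronbach's alpha. As a minimal illustration (the function name and the example data below are hypothetical, not from the workshop), alpha compares the sum of the item variances with the variance of respondents' total scores:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for the internal consistency of a Likert scale.

    items: one list per questionnaire item, each containing one score
    per respondent (all lists the same length)."""
    k = len(items)
    item_var = sum(pvariance(col) for col in items)   # sum of item variances
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Example: three items scored by four respondents (made-up data)
scores = [[1, 2, 3, 4], [2, 2, 4, 4], [1, 3, 3, 5]]
print(round(cronbach_alpha(scores), 3))  # → 0.933
```

Values near 1 indicate that the items measure a single underlying construct, which is what a scaling and norming step presupposes.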
- Shuwei Xu, postgraduate student, SJTU, China. 4pm-5:30pm, 1 November.
Title: Automatic Speech Recognition for Speech to Text, with a focus on Chinese
Abstract: Decades’ worth of hand-engineered domain knowledge has gone into current state-of-the-art automatic speech recognition (ASR) pipelines. A simple but powerful alternative is to train such ASR models end-to-end, using deep learning to replace most modules with a single model. On such a system, built on end-to-end deep learning, we can employ a range of deep learning techniques: capturing large training sets, training larger models with high-performance computing, and methodically exploring the space of neural network architectures. Ever since deep learning hit the scene in speech recognition, word error rates have fallen dramatically. At present, only a small number of commercially available speech recognition services are on the market, dominated by a few large companies. This restricts the options of start-ups, researchers, and other companies that intend to introduce voice functions in their products and services. This summer, a Chinese speech-to-text conversion engine was established for Red Hen Lab, resulting in a working application. The project can now run DeepSpeech2 on PaddlePaddle inside Singularity on the CWRU HPC, using a model developed by Baidu with its abundant Chinese materials. An ASR system based on PaddlePaddle and Kaldi has also been established.
Bio: Postgraduate student at SJTU, majoring in Electrical Engineering. GSoC 2018 student, Chuntsung Scholar, Shanghai Jiao Tong University Outstanding Graduate, and Best Student of 2017 at Shanghai Jiao Tong University.
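The word error rates mentioned in the abstract above are the standard ASR evaluation metric: the word-level edit distance between a reference transcript and the system's hypothesis, divided by the reference length. A minimal pure-Python sketch (the function name is mine, not part of the project):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed as word-level Levenshtein distance by dynamic programming."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j]: edit distance between the first i reference words
    # and the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(
                d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]),  # substitution / match
                d[i - 1][j] + 1,                               # deletion
                d[i][j - 1] + 1,                               # insertion
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One deletion against a six-word reference gives WER = 1/6
print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))
```

For Chinese ASR the same computation is usually run over characters rather than space-separated words (character error rate), since Chinese text is not whitespace-tokenized.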