Cristóbal Pagán Cánovas (sites.google.com/site/cristobalpagancanovas/) is a Professor at the University of Murcia and co-director of the Daedalus Lab, the Murcia Center for Cognition, Communication, and Creativity (daedalus.um.es). He works on language-gesture-voice patterns in multimodal communication, conceptual integration, distributed cognition, figurative language, the phraseology of oral poetic traditions, and the cognitive operations underlying creativity in literature, music composition, visual representation, and everyday language.
Cristóbal is coordinating the MULTIDATA platform for video analysis, an initiative that also involves Radboud University Nijmegen and FAU Erlangen-Nürnberg and is funded by an Erasmus Plus grant for Cooperation in Higher Education. MULTIDATA is building an online platform for extracting and analyzing multimodal data from videos. It will also include a forum where the multimodal data-science community can interact, as well as a set of multimedia tools for making the most of the platform in teaching and research. Cristóbal will demonstrate the test version of the platform, which currently allows users to: automatically transcribe videos in more than 40 languages and retrieve the corresponding time-aligned subtitles; process the videos with computer-vision software that detects key body points and yields normalized coordinates for detailed reconstruction and modeling of body motion and gesture trajectories; run automatic speech analysis that extracts voice intensity and pitch; and generate R and Python dataframes and visualizations representing frame-by-frame time series of all these data across a video collection.
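To make the shape of this output concrete, the short Python sketch below is purely illustrative and is not the MULTIDATA software itself: it fabricates pose coordinates and voice measurements for one short clip and assembles them into the kind of frame-by-frame time-series dataframe described above. All variable names, frame rates, and values are hypothetical.

# Illustrative sketch only: NOT the MULTIDATA API, just a toy example of the kind of
# frame-by-frame time series described above (normalized body-point coordinates plus
# voice intensity and pitch, aligned to video frames).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
fps = 25                       # hypothetical video frame rate
n_frames = 250                 # ten seconds of video

frames = pd.DataFrame({
    "frame": np.arange(n_frames),
    "time_s": np.arange(n_frames) / fps,
    # Hypothetical pose-tracking output: normalized (0-1) coordinates of one key body point.
    "right_wrist_x": 0.5 + 0.1 * np.sin(np.linspace(0, 6 * np.pi, n_frames)) + rng.normal(0, 0.005, n_frames),
    "right_wrist_y": 0.6 + 0.05 * np.cos(np.linspace(0, 6 * np.pi, n_frames)) + rng.normal(0, 0.005, n_frames),
    # Hypothetical speech-analysis output, already resampled to the video frame rate.
    "intensity_db": 60 + rng.standard_normal(n_frames).cumsum() * 0.25,
    "pitch_hz": 180 + 20 * np.sin(np.linspace(0, 2 * np.pi, n_frames)) + rng.normal(0, 3, n_frames),
})

# A simple derived gesture measure: frame-to-frame wrist displacement (trajectory speed).
frames["wrist_speed"] = np.hypot(
    frames["right_wrist_x"].diff(), frames["right_wrist_y"].diff()
) * fps

# Frame-by-frame time series like this can be aggregated, visualized, or exported
# for further analysis in Python or R.
print(frames[["time_s", "wrist_speed", "pitch_hz", "intensity_db"]].head())

In practice, the coordinates would come from pose-estimation software and the intensity and pitch tracks from speech-analysis tools, resampled to the video frame rate before being merged; the sketch only mimics that final, merged table.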
To showcase the potential of these tools, Cristóbal will also present preliminary results from his MULTIFLOW project (daedalus.um.es/?page_id=32), funded by a national grant from Spain's Ministry of Science. These studies model correlations between semantic distinctions and gestural patterns extracted from thousands of videos in the NewsScape Library of Television News, a video collection developed by the Red Hen Lab (www.redhenlab.org/), an international consortium for research into multimodal communication. The results indicate that lexical choices systematically influence the interplay between voice and body in oral communication.
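As a toy illustration of this kind of analysis, and not the MULTIFLOW pipeline itself, the Python sketch below tests whether a binary lexical distinction co-varies with a per-clip gesture measure; the data, the injected effect, and the choice of test are all hypothetical.

# Purely illustrative sketch, not the MULTIFLOW analysis: a toy test of whether a
# binary lexical distinction co-varies with a summary gesture measure per clip.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
n_clips = 200

# Hypothetical per-clip data: which of two lexical variants the speaker used, plus a
# gesture measure (e.g. mean wrist speed) extracted from that clip.
clips = pd.DataFrame({
    "lexical_variant": rng.choice(["variant_a", "variant_b"], size=n_clips),
    "mean_wrist_speed": rng.gamma(shape=2.0, scale=0.05, size=n_clips),
})
# Inject a small artificial effect so the toy test has something to detect.
clips.loc[clips["lexical_variant"] == "variant_b", "mean_wrist_speed"] += 0.02

a = clips.loc[clips["lexical_variant"] == "variant_a", "mean_wrist_speed"]
b = clips.loc[clips["lexical_variant"] == "variant_b", "mean_wrist_speed"]

# Non-parametric comparison of the gesture measure across the two lexical groups.
stat, p = stats.mannwhitneyu(a, b)
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")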