Be(havioural)Chat explores body language in online communication, taking on the challenge posed by the pandemic: integrating a greater range of sensory modalities into processes of digital exchange. Online face-to-face communication strips away the body language still present in physically co-located conversation, impoverishing day-to-day exchange and the sense of connection. Gesture theory suggests that a spectrum of modalities and sub-modalities form parallel communication channels, together weaving a tapestry for interpersonal communication and connection: facial-visual, oral-visual, oral-auditory, hand-visual and foot-auditory. Conventional voice and video communication platforms still cannot match the subtleties offered by whole-body interaction in a shared physical setting. BeChat proposes an augmented online communication experience that integrates body language into face-to-face online communication: users create and train a personalised avatar to reflect their own body language, which is then added to the communication interface in remote interactions. BeChat operates as a plug-in for existing communication platforms, permitting a sense of shared experience and intimacy.
Parallel multimodal channels of communication
Face-to-face communication takes place in several different modalities and sub-modalities, illustrated in stylised fashion in Figure 1. The figure shows five modalities that can form parallel communication channels: facial-visual, oral-visual, oral-auditory, hand-visual and foot-auditory; hand-auditory and foot-visual channels could also be added.
For instance, we can express love through facial expressions, language, gestures and body movements.
View body as a medium
Because of the limitations of devices and of online communication scenarios, users typically rely on conventional voice and video platforms, while whole-body interaction is largely omitted. A personalised avatar may offer an alternative way for us to reflect our own body language in remote interaction.
Users can customise their avatars according to their own features. Using webcam-based motion capture, a user's body-motion data can be collected and transferred to their digital avatar. The resulting avatar animations can be exported and saved in the user's behaviour library.
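The behaviour library described above might be organised as a simple archive of tagged motion clips. The sketch below is only an illustration under assumed names (`MotionClip`, `BehaviourLibrary`, the joint keys); it stands in for whatever capture pipeline and storage format an actual implementation would use.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class MotionClip:
    """A captured body-motion sequence, stored as per-frame joint positions."""
    name: str
    tags: list    # descriptive tags later used for retrieval, e.g. ["greeting", "happy"]
    frames: list  # each frame maps a joint name to normalised (x, y) webcam coordinates

@dataclass
class BehaviourLibrary:
    """A user's personal archive of recorded motion clips."""
    clips: list = field(default_factory=list)

    def add(self, clip: MotionClip) -> None:
        self.clips.append(clip)

    def save(self, path: str) -> None:
        # Export the library so avatar animations can be reloaded in later sessions.
        with open(path, "w") as f:
            json.dump(asdict(self), f)

# Example: archive a (stubbed) two-frame wave gesture captured from the webcam.
lib = BehaviourLibrary()
lib.add(MotionClip(
    name="wave_hello",
    tags=["greeting", "happy"],
    frames=[{"right_wrist": (0.62, 0.40)}, {"right_wrist": (0.70, 0.35)}],
))
```

In practice the frames would come from a pose-estimation model running on the webcam feed rather than being typed in by hand.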
Body language expression in live chat
When people communicate, the facial, oral and bodily modalities coordinate to make expression more accurate.
When a user sits in front of the computer and turns on the webcam to speak, their facial expressions and gestures can be recognised via the webcam, and their speech via the microphone. Facial expression and speech are collected as input; descriptive tags are then generated through sentiment analysis and speech-signal processing. These tags retrieve the best-matching body movements from the user's behaviour archive.
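The retrieval step above could be as simple as matching the generated tags against each archived clip's tags. A minimal sketch, assuming tag overlap as the matching criterion (the clip names and tags here are hypothetical, not part of BeChat's actual archive):

```python
def match_clip(tags, library):
    """Return the archived clip whose tag set best overlaps the query tags."""
    def overlap(clip):
        return len(set(tags) & set(clip["tags"]))
    best = max(library, key=overlap)
    return best if overlap(best) > 0 else None

# A toy behaviour archive of tagged body movements.
library = [
    {"name": "wave_hello", "tags": ["greeting", "happy"]},
    {"name": "shrug",      "tags": ["uncertain", "neutral"]},
    {"name": "thumbs_up",  "tags": ["happy", "approval"]},
]

# Tags as sentiment analysis might produce for "Great to see you!"
chosen = match_clip(["happy", "greeting"], library)
print(chosen["name"])  # wave_hello
```

A real system would likely weight tags or use embedding similarity rather than raw overlap, but the retrieval structure is the same.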
BeChat – A plug-in for existing communication
- Gibbon, D., 2009. Gesture Theory is Linguistics: On Modelling Multimodality as Prosody.
- Clark, A., 2008. Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford University Press.
- Sterelny, K., 2010. Minds: Extended or Scaffolded? Springer Science+Business Media B.V.
- Akdemir, N., 2018. Visible Expression of Social Identity: The Clothing and Fashion. Gaziantep University Journal of Social Sciences.
- McGarry, A. and Jasper, J., 2015. The Identity Dilemma: Social Movements and Collective Identity. Temple University Press.
- Calhoun, C. (ed.), 1994. Social Theory and the Politics of Identity. Oxford: Blackwell.
- Bucci, A., 2018. We Are Not Alone: Perception and the Others. BrainFactor. ISSN 2035-7109.
- Kwastek, K., 2015. Aesthetics of Interaction in Digital Art. Cambridge, Mass.: MIT Press.