The smartphone has become an intrinsic part of daily life, taking on the role of a trusted companion in the context of communication technology. A persistent and widely documented issue in the domain of embodied technology, however, is the lack of natural interaction: communication takes place not only through speech, but also through gestures such as facial expressions, gaze, head movements, hand movements and body posture. This research argues that these non-verbal channels are needed to fully support communication and to make interactions more engaging and efficient. The focus of this research is a telepresence (TP) robotic system (MobiBot) that affords the ability to convey non-verbal behaviours, such as gesture and posture, that can make interactions more natural and life-like. The exploratory study focused specifically on the head rather than any other body part, as it is a rich source of information for speech-related movement. It investigated the value of incorporating head movements into the use of telepresence robots as communication platforms by evaluating a system that reproduces head movement as closely as possible under manual control. Then, expanding the consideration of the physical embodiment of the system to include the head, shoulders and proximity, this research proposes a new protocol for translating the vocal stream into gesture in order to generate human-like behaviour and support more natural interaction within the embodied technology system. A modified version of the Unified Theory of Acceptance and Use of Technology (UTAUT) model was used to explore the social and cognitive experience of using the system. Subjects' acceptance of gesturally supported video communication was examined by comparing different methods of interaction on video calls using the MobiBot TP system.
The comparison was between mimicking movement (where the operator replicates the movement by pressing buttons) and automated movement triggered by the user's vocal stream, and was carried out in order to evaluate their respective effects. The results of the comparative analysis indicated that users preferred the mimicking interaction method for video calls on the MobiBot system over the vocal-triggered automatic movement method. Evaluation of, and feedback on, the incorporated movements suggests that a mix of both vocal-triggered automatic and mimicking movements, using fewer large movements and more small, steady movements, is optimal. In addition, a set of guidelines was developed from the findings of both studies for 'personifying' telepresence conversations and for the development of such systems. Overall, this research demonstrated significantly greater benefits from incorporating movement into such systems.
Date of Award: 1 May 2015
University of Strathclyde
Supervisors: Andrew Wodehouse & Xiu Yan