Enabling physical action in computer mediated communication: an embodied interaction approach
Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik (Immersive Interaction Lab (i2lab))
2015 (English) Licentiate thesis, comprising papers (Other academic)
Place, publisher, year, edition, pages
Umeå: Department of Applied Physics and Electronics, Umeå University, 2015, p. 40
Series
Digital Media Lab, ISSN 1652-6295 ; 20
Keywords [en]
biologically inspired system, parallel robot, neck robot, head pose estimation, embodied interaction, telepresence system, quality of interaction, embodied telepresence system, Mona-Lisa gaze effect, eye-contact
HSV category
Identifiers
URN: urn:nbn:se:umu:diva-108569, ISBN: 978-91-7601-321-2 (print), OAI: oai:DiVA.org:umu-108569, DiVA id: diva2:853612
Supervisor
Available from: 2015-09-15 Created: 2015-09-14 Last updated: 2018-06-07 Bibliographically approved
List of papers
1. Telepresence Mechatronic Robot (TEBoT): Towards the design and control of socially interactive bio-inspired system
2016 (English) In: Journal of Intelligent & Fuzzy Systems, ISSN 1064-1246, E-ISSN 1875-8967, Vol. 31, no. 5, p. 2597-2610. Article in journal (Refereed) Published
Abstract [en]

Socially interactive systems are embodied agents that engage in social interactions with humans. From a design perspective, these systems are built around a biologically inspired (bio-inspired) design that can mimic and simulate human-like communication cues and gestures. The design of a bio-inspired system usually consists of (i) studying biological characteristics, (ii) designing a similar biological robot, and (iii) motion planning that can mimic the biological counterpart. In this article, we present the design, development, control strategy, and verification of our socially interactive bio-inspired robot, the Telepresence Mechatronic Robot (TEBoT). The key contribution of our work is the embodiment of real human neck movements by (i) designing a mechatronic platform based on the dynamics of a real human neck and (ii) capturing real head movements through our novel single-camera-based vision algorithm. Our socially interactive bio-inspired system is based on an intuitive integration-design strategy that combines a computer-vision-based geometric head pose estimation algorithm, a model-based design (MBD) approach, and real-time motion planning techniques. We have conducted extensive testing to demonstrate the effectiveness and robustness of the proposed system.
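
Purely as an illustrative sketch (not the authors' model-based controller), the snippet below shows one way estimated head angles could be turned into rate-limited neck-joint commands; the joint limits, the maximum rate, and the send_to_robot() stub are assumptions introduced here for illustration.

```python
# Hypothetical sketch: map estimated head angles to a 3-DOF neck.
# Joint limits, MAX_RATE and send_to_robot() are assumptions, not TEBoT's values.
import time

JOINT_LIMITS = {"yaw": 60.0, "pitch": 30.0, "roll": 30.0}  # degrees (assumed)
MAX_RATE = 90.0                                            # deg/s (assumed)

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def send_to_robot(cmd):
    print(cmd)  # stand-in for the real actuator interface

class NeckController:
    def __init__(self):
        self.cmd = {"yaw": 0.0, "pitch": 0.0, "roll": 0.0}
        self.t = time.monotonic()

    def update(self, pose):
        """pose: estimated head angles in degrees, e.g. {"yaw": 12.0, ...}."""
        now = time.monotonic()
        dt, self.t = now - self.t, now
        for axis, target in pose.items():
            limit = JOINT_LIMITS[axis]
            target = clamp(target, -limit, limit)           # respect joint range
            step = clamp(target - self.cmd[axis],
                         -MAX_RATE * dt, MAX_RATE * dt)     # rate-limit motion
            self.cmd[axis] += step
        send_to_robot(self.cmd)
```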

Keywords
Socially interactive robot, biologically inspired robot, head pose estimation, vision based robot control, model based design, embodied telepresence system
HSV category
Identifiers
URN: urn:nbn:se:umu:diva-108552, DOI: 10.3233/JIFS-169100, ISI: 000386532000015
Available from: 2015-09-14 Created: 2015-09-14 Last updated: 2018-06-07 Bibliographically approved
2. Head Orientation Modeling: Geometric Head Pose Estimation using Monocular Camera
2013 (English) In: Proceedings of the 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013, 2013, p. 149-153. Conference paper, Published paper (Other academic)
Abstract [en]

In this paper we propose a simple and novel method for head pose estimation using 3D geometric modeling. Our algorithm initially employs Haar-like features to detect the face and the facial feature areas (more precisely, the eyes). For robust tracking of these regions, it also uses the Tracking-Learning-Detection (TLD) framework on the given video sequence. Based on the two human eye areas, we model a pivot point using a distance measure derived from anthropometric statistics and the MPEG-4 coding scheme. This simple geometrical approach relies on the human facial feature structure in the camera-view plane to estimate the yaw, pitch and roll of the human head. The accuracy and effectiveness of the proposed method are reported on live video sequences against a head-mounted inertial measurement unit (IMU).
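
For orientation only, here is a minimal sketch of the geometric idea using OpenCV Haar cascades; it omits the TLD tracking and the anthropometric/MPEG-4 pivot-point model of the paper, and the 45-degree yaw scaling is an assumed constant rather than a calibrated value.

```python
# Rough, hypothetical head-pose sketch from one camera: roll from the tilt of
# the inter-ocular line, yaw from the eye midpoint's offset in the face box.
import math
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def rough_pose(gray):
    """Return (roll_deg, yaw_deg) for the first usable face, or None."""
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
        if len(eyes) < 2:
            continue
        # Keep the two largest detections and order them left-to-right.
        e1, e2 = sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]
        c1 = (e1[0] + e1[2] / 2, e1[1] + e1[3] / 2)
        c2 = (e2[0] + e2[2] / 2, e2[1] + e2[3] / 2)
        if c1[0] > c2[0]:
            c1, c2 = c2, c1
        roll = math.degrees(math.atan2(c2[1] - c1[1], c2[0] - c1[0]))
        mid_x = (c1[0] + c2[0]) / 2
        yaw = 45.0 * (mid_x - w / 2) / (w / 2)   # 45 deg full-scale is assumed
        return roll, yaw
    return None
```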

Keywords
Head pose estimation, 3D geometric modeling, human motion analysis
HSV category
Research subject
Computerized Image Analysis
Identifiers
URN: urn:nbn:se:umu:diva-82187, DOI: 10.12792/icisip2013.031
Conference
The 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing, Japan, 2013
Available from: 2013-10-28 Created: 2013-10-28 Last updated: 2019-05-09 Bibliographically approved
3. Embodied tele-presence system (ETS): designing tele-presence for video teleconferencing
2014 (English) In: Design, user experience, and usability: User experience design for diverse interaction platforms and environments / [ed] Aaron Marcus, Springer International Publishing Switzerland, 2014, Vol. 8518, p. 574-585. Conference paper, Published paper (Refereed)
Abstract [en]

In spite of the progress made in teleconferencing over the last decades, it is still far from a resolved issue. In this work, we present an intuitive video teleconferencing system, the Embodied Tele-Presence System (ETS), which is based on the embodied interaction concept. This work presents the results of a user study considering the hypothesis: "An embodied-interaction-based video conferencing system performs better than a standard video conferencing system in representing nonverbal behaviors, thus creating a 'feeling of presence' of a remote person among his/her local collaborators." Our ETS integrates standard audio-video conferencing with a mechanical embodiment of a remote person's head gestures (as nonverbal behavior) to enhance the level of interaction. To highlight the technical challenges and design principles behind such tele-presence systems, we have also performed a system evaluation that shows the accuracy and efficiency of our ETS design. The paper further provides an overview of our case study and an analysis of our user evaluation. The user study shows that the proposed embodied interaction approach in video teleconferencing increases 'in-meeting interaction' and enhances the 'feeling of presence' of the remote participant among his/her collaborators.

Place, publisher, year, edition, pages
Springer International Publishing Switzerland, 2014
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 8518
Keywords
Embodied Interaction, Multimodal Interaction, HCI, Audio-Video Conferencing, Head Gesture, Tele-Presence
HSV category
Identifiers
URN: urn:nbn:se:umu:diva-91017, DOI: 10.1007/978-3-319-07626-3_54, ISI: 000342846900054, ISBN: 978-3-319-07625-6, ISBN: 978-3-319-07626-3
Conference
3rd International Conference on Design, User Experience, and Usability (DUXU), JUN 22-27, 2014, Heraklion, GREECE
Available from: 2014-07-07 Created: 2014-07-07 Last updated: 2018-06-07 Bibliographically approved
4. A pilot user's prospective in mobile robotic telepresence system
2014 (English) In: 2014 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA 2014), IEEE, 2014. Conference paper, Published paper (Refereed)
Abstract [en]

In this work we present an interactive video conferencing system specifically designed to enhance the experience of video teleconferencing for a pilot user. We have used an Embodied Telepresence System (ETS), which was previously designed to enhance the experience of video teleconferencing for the collaborators. In this work we have deployed the ETS in a novel scenario to improve the experience of the pilot user during distance communication. The ETS is used to adjust the view of the pilot user at the distant location (e.g. a distantly located conference/meeting). A velocity profile control for the ETS was developed, which is implicitly controlled by the head of the pilot user. An experiment was conducted to test whether the view-adjustment capability of the ETS increases the collaboration experience of video conferencing for the pilot user. In the user study, participants (pilot users) interacted using the ETS and using a traditional computer-based video conferencing tool. Overall, the user study suggests the effectiveness of our approach in enhancing the experience of video conferencing for the pilot user.
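
As a hedged sketch of such an implicit control (not the paper's implementation), head yaw could drive the pan velocity through a dead-zone, a gain, and a saturation limit; all three constants below are assumptions.

```python
# Hypothetical velocity-profile mapping from the pilot user's head yaw (deg)
# to a pan velocity command (deg/s); dead-zone, gain and limit are assumed.
def pan_velocity(head_yaw_deg, dead_zone=5.0, gain=2.0, v_max=30.0):
    if abs(head_yaw_deg) < dead_zone:
        return 0.0                      # ignore small, unintentional motion
    v = min(gain * (abs(head_yaw_deg) - dead_zone), v_max)  # saturate
    return v if head_yaw_deg > 0 else -v
```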

Place, publisher, year, edition, pages
IEEE, 2014
Keywords
Teleconferencing, Collaboration, Computers, Estimation, Noise, Computer science, Human factors
HSV category
Identifiers
URN: urn:nbn:se:umu:diva-108559, DOI: 10.1109/APSIPA.2014.7041635, ISI: 000392861900123, ISBN: 978-6-1636-1823-8
Conference
Annual Summit and Conference of the Asia-Pacific Signal and Information Processing Association (APSIPA), DEC 09-12, 2014, Angkor, CAMBODIA
Available from: 2015-09-14 Created: 2015-09-14 Last updated: 2018-06-07 Bibliographically approved
5. Gaze perception and awareness in smart devices
2016 (English) In: International Journal of Human-Computer Studies, ISSN 1071-5819, E-ISSN 1095-9300, Vol. 92-93, p. 55-65. Article in journal (Refereed) Published
Abstract [en]

Eye contact and gaze awareness play a significant role in conveying emotions and intentions during face-to-face conversation. Humans can perceive each other's gaze quite naturally and accurately. However, gaze awareness/perception is ambiguous during video teleconferencing performed on computer-based devices (such as laptops, tablets, and smart-phones). The reasons for this ambiguity are (i) the camera position relative to the screen and (ii) the 2D rendition of a 3D human face, i.e., the 2D screen is unable to deliver an accurate gaze during video teleconferencing. To solve this problem, researchers have proposed different hardware setups with complex software algorithms. The most recent solutions for accurate gaze perception employ 3D interfaces, such as 3D screens and 3D face-masks. However, today's commonly used video teleconferencing devices are smart devices with 2D screens. Therefore, there is a need to improve gaze awareness/perception in these smart devices. In this work, we have revisited the question: how to improve a remote user's gaze awareness among his/her collaborators. Our hypothesis is that an accurate gaze perception can be achieved by the '3D embodiment' of a remote user's head gestures during video teleconferencing. We have prototyped an embodied telepresence system (ETS) for the 3D embodiment of a remote user's head. Our ETS is based on a 3-DOF neck robot with a mounted smart device (tablet PC). The electromechanical platform in combination with a smart device is a novel setup that is used for studying gaze awareness/perception on 2D screen-based smart devices during video teleconferencing. Two important gaze-related issues are considered in this work: (i) the 'Mona Lisa gaze effect' – the gaze appears directed at the observer independent of his position in the room, and (ii) 'gaze awareness/faithfulness' – the ability to perceive an accurate spatial relationship between the observing person and the object attended to by an actor. Our results confirm that the 3D embodiment of a remote user's head not only mitigates the Mona Lisa gaze effect but also supports three levels of gaze faithfulness, hence accurately projecting the human gaze in distant space.

Place, publisher, year, edition, pages
Elsevier, 2016
Keywords
Mona-Lisa gaze effect, gaze awareness, computer-mediated communication, eye contact, head gesture, gaze faithfulness, embodied telepresence system, tablet PC, HCI
HSV category
Identifiers
URN: urn:nbn:se:umu:diva-108568, DOI: 10.1016/j.ijhcs.2016.05.002, ISI: 000379367900005
Available from: 2015-09-14 Created: 2015-09-14 Last updated: 2018-06-07 Bibliographically approved

Open Access in DiVA

fulltext: FULLTEXT01.pdf (769 kB, application/pdf)

Search in DiVA

By author/editor
Khan, Muhammad Sikandar Lal
By organisation
