Investigating joint referencing between VR and non-VR users and its effect on collaboration
Linköping University, Department of Computer and Information Science.
2018 (English). Independent thesis, Basic level (degree of Bachelor), 12 credits / 18 HE credits. Student thesis.
Abstract [en]

Virtual Reality has so far seen limited use in society outside the gaming industry. One reason could be its exclusively individual-viewpoint-based nature and the lack of collaborative experiences available to people without VR equipment. This study investigated how joint visual reference points might help a VR user and a non-VR user collaborate, using a repeated measures design with three conditions. In the experiment, one user was equipped with an HTC Vive while the other stood in front of a large screen, and the pair was presented with 0, 4 or 9 joint visual reference points, each shown from the user's own viewpoint. Results of the tasks performed by the participants indicate that 9 joint visual reference points increased a pair's collaboration efficiency. However, the effect was no longer present once joint attention had been fully established. Furthermore, non-VR users found it significantly harder to give instructions to the other user when no joint visual reference points were available, while the VR users did not find it significantly harder. Additionally, differences in spatial orientation ability between VR users and non-VR users were found to predict different patterns across the three conditions. Judging from the results, for the VR users 4 reference points helped more than 0, and 9 helped more than 4. However, an interaction effect was found for the non-VR users between spatial orientation ability and the reference point condition: 4 reference points had a counter-productive effect on task efficiency for non-VR users with lower spatial orientation ability, while they did seem to help the group with higher spatial orientation ability. With 9 joint visual reference points, group differences between the high and low spatial orientation ability groups disappeared for both VR users and non-VR users.
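The abstract does not name the statistical procedure behind these comparisons, so the following is only a minimal sketch, assuming a repeated measures analysis of task time over the three reference point conditions (0, 4, 9) using statsmodels' AnovaRM; the column names and data values are hypothetical.

# Hypothetical sketch only: repeated measures comparison of task time across
# the 0 / 4 / 9 joint-reference-point conditions. All names and numbers below
# are made up; the thesis's actual analysis is not specified in the abstract.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One row per pair and condition: pair id, condition, task time in seconds.
data = pd.DataFrame({
    "pair": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "ref_points": ["0", "4", "9"] * 3,
    "task_time": [92.1, 85.4, 70.3, 101.7, 99.2, 81.5, 88.0, 79.6, 74.2],
})

# Repeated measures ANOVA: does the number of joint visual reference points
# affect how quickly a VR / non-VR pair completes the task?
result = AnovaRM(data, depvar="task_time", subject="pair",
                 within=["ref_points"]).fit()
print(result)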

Place, publisher, year, edition, pages
2018, p. 51.
Keywords [en]
VR, Virtual Reality, non-VR, Collaboration, Cooperation, Two users, visual reference points, communication
National Category
Human Computer Interaction
Identifiers
URN: urn:nbn:se:liu:diva-149635
ISRN: LIU-IDA/KOGVET-G--18/009--SE
OAI: oai:DiVA.org:liu-149635
DiVA id: diva2:1232680
External cooperation
RISE Interactive Norrköping
Subject / course
Cognitive science
Presentation
2018-06-05, Linköping, 15:00 (Swedish)
Available from: 2018-07-17. Created: 2018-07-12. Last updated: 2018-07-17. Bibliographically approved.

Open Access in DiVA

fulltext (1089 kB)
File information
File name: FULLTEXT01.pdf
File size: 1089 kB
Checksum: SHA-512
2c3b7a43db692aa1c54484ffffe3c9183f3e09e9dc9565ed3d1e413d7f9940955ee1e1ebc7717b9023b36e8976bec30513d513de8c71d82662f54b2fd0a12892
Type: fulltext
Mimetype: application/pdf

By author/editor
Bennerhed, Erik
By organisation
Department of Computer and Information Science