Towards Embodied Perspective: Exploring first-person, stereoscopic, 4K, wall-sized rendering of embodied sculpting
Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis
The central goal of this thesis is creating and testing technology to produce embodied interaction experiences. Embodied interaction is the sense that we inhabit a digital space, our minds operating on it as if it were our physical bodies, without conscious thought, as naturally as reaching out with your fingers and touching the object in front of you. Traditional interaction techniques such as keyboard and mouse get in the way of achieving embodiment. In this thesis, we have created an embodied perspective of virtual three-dimensional objects floating in front of a user. Users can see the object from a first-person perspective without a heads-up display and can change their perspective of the object by shifting their point of view. The technology and affordances that make this possible in an unobtrusive, practical and efficient way are the subject of this thesis.

Using a depth sensor, Microsoft's Kinect, we track the user's position in front of a screen in real time, making it possible to change the perspective seen by each of the user's eyes to fit their real point of view, in order to achieve 3D embodied interaction outside the screen. We combined this first-person perspective with an embodied sculpting project that includes a wireless haptic glove, which lets the user feel when they touch the model, and a small one-handed remote controller whose single button rotates the object as the user desires.

We have achieved what we call Embodied Perspective: an outside-the-screen stereoscopic visualization that reacts to body interaction as if the visualization were really where the user perceives it, thanks to the data from the depth sensor.
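Head-coupled stereo rendering of this kind is commonly implemented as an asymmetric-frustum (off-axis) projection recomputed per eye from the tracked head position. The sketch below is a minimal illustration of that standard technique (following Kooima's generalized perspective projection), not the thesis's actual implementation; the screen dimensions, head position, and interpupillary distance are hypothetical sample values.

```python
import numpy as np

def off_axis_projection(eye, screen_corners, near, far):
    """Asymmetric-frustum projection matrix for a tracked eye position.

    eye            -- 3-vector, eye position in world units
    screen_corners -- (lower-left, lower-right, upper-left) of the display
    Returns the 4x4 OpenGL-style frustum matrix; a full renderer would
    compose it with the screen-basis rotation and an eye translation.
    """
    pa, pb, pc = (np.asarray(p, dtype=float) for p in screen_corners)
    # Orthonormal screen basis: right, up, normal.
    vr = pb - pa; vr /= np.linalg.norm(vr)
    vu = pc - pa; vu /= np.linalg.norm(vu)
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)
    # Vectors from the eye to the screen corners.
    va, vb, vc = pa - eye, pb - eye, pc - eye
    d = -np.dot(va, vn)                 # distance from eye to screen plane
    # Frustum extents projected onto the near plane.
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    return np.array([
        [2*near/(r-l), 0,            (r+l)/(r-l),          0],
        [0,            2*near/(t-b), (t+b)/(t-b),          0],
        [0,            0,            -(far+near)/(far-near), -2*far*near/(far-near)],
        [0,            0,            -1,                   0],
    ])

# Per-eye projections from a tracked head position (e.g. from the Kinect),
# offset by half the interpupillary distance along the screen's right axis.
head = np.array([0.1, 0.2, 1.5])        # metres, hypothetical sample
ipd = 0.064                             # typical interpupillary distance
corners = ([-1.0, -0.6, 0.0], [1.0, -0.6, 0.0], [-1.0, 0.6, 0.0])
left  = off_axis_projection(head - np.array([ipd/2, 0, 0]), corners, 0.1, 10.0)
right = off_axis_projection(head + np.array([ipd/2, 0, 0]), corners, 0.1, 10.0)
```

Because the frustum skews toward the eye every frame, the screen behaves like a window onto the scene: objects rendered in front of the screen plane appear fixed in physical space as the user moves, which is the effect the abstract describes as an outside-the-screen visualization.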
This method does not block the user's view of their own body, but fits and matches their brain's perception. When applied to virtual sculpting (embodied sculpting), it lets the user feel and understand their actions much better: where they are touching and sculpting, and how they should move to reach where they want, since the movements are the same ones they would perform with their body in a real-world sculpting situation.

A further study of the viability of this method, not only for single-person interaction but for group visualization of a single user's perspective, is discussed and proposed.
Identifiers
URN: urn:nbn:se:kth:diva-155950
OAI: oai:DiVA.org:kth-155950
DiVA: diva2:763672