1 - 6 of 6
  • 1.
    Lungaro, Pietro
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS.
    Tollmar, Konrad
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS.
    Immersivemote: Combining foveated AI and streaming for immersive remote operations. 2019. In: ACM SIGGRAPH 2019 Talks, SIGGRAPH 2019, Association for Computing Machinery (ACM), 2019, article id 3329895. Conference paper (Refereed)
    Abstract [en]

    Immersivemote is a novel technology combining our previously presented foveated streaming solution with our novel foveated AI concept. While we have previously shown that foveated streaming can achieve 90% bandwidth savings compared to existing streaming solutions, foveated AI is designed to enable real-time video augmentations that are controlled through eye-gaze. The combined solution is therefore capable of effectively interfacing remote operators with mission-critical information obtained, in real time, from task-aware machine understanding of the scene and IoT data.
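
    The talk does not include an implementation; the sketch below is only a rough illustration of the gaze-controlled augmentation idea described in the abstract, i.e. surfacing detailed information for the object the operator is currently fixating. All names, coordinates and the foveal radius are hypothetical, not taken from the paper.

        # Minimal sketch of gaze-controlled augmentation (hypothetical, not the authors' code):
        # show detailed annotations only for the object the operator is currently fixating.
        from dataclasses import dataclass

        @dataclass
        class DetectedObject:
            label: str            # e.g. from a scene-understanding model
            x: float              # object centre in normalised screen coordinates
            y: float
            details: str          # mission-critical info, e.g. from IoT sensors

        def annotation_for_gaze(objects, gaze_x, gaze_y, radius=0.08):
            """Return the annotation for the object closest to the gaze point,
            if it lies within the (hypothetical) foveal radius."""
            best, best_d2 = None, radius ** 2
            for obj in objects:
                d2 = (obj.x - gaze_x) ** 2 + (obj.y - gaze_y) ** 2
                if d2 <= best_d2:
                    best, best_d2 = obj, d2
            return f"{best.label}: {best.details}" if best else None

        # Example frame: two detections, operator looking near the valve.
        scene = [DetectedObject("valve", 0.42, 0.55, "pressure 3.1 bar"),
                 DetectedObject("pump", 0.80, 0.30, "temperature 71 C")]
        print(annotation_for_gaze(scene, gaze_x=0.45, gaze_y=0.53))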

  • 2.
    Lungaro, Pietro
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS, Mobile Service Laboratory (MS Lab).
    Tollmar, Konrad
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS, Mobile Service Laboratory (MS Lab).
    Mittal, Ashutosh
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS, Mobile Service Laboratory (MS Lab).
    Fanghella Valero, Alfredo
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS, Mobile Service Laboratory (MS Lab).
    Gaze- and QoE-aware video streaming solutions for mobile VR. 2017. In: Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST, Association for Computing Machinery, 2017. Conference paper (Refereed)
    Abstract [en]

    This demo showcases a novel approach to content delivery for 360° video streaming. It exploits information from connected eye-trackers embedded in the users' VR HMDs. The presented technology enables the delivery of high quality, in real time, around the users' fixation points while lowering the image quality everywhere else. The goal of the proposed approach is to substantially reduce the overall bandwidth requirements for supporting VR video experiences while delivering high levels of user-perceived quality. The network connection between the VR system and the content server is emulated in this demo, allowing users to experience the QoE performance achievable with data rates and RTTs in the range of current 4G and upcoming 5G networks. Users can further control additional service parameters, including video type, content resolution in the foveal region and background, and size of the foveal region. At the end of each run, users are presented with a summary of the amount of bandwidth consumed with the chosen system settings and a comparison with the cost of current content delivery solutions. The overall goal of this demo is to provide a tangible experience of the tradeoffs among bandwidth, RTT and QoE for the mobile provision of future data-intensive VR services.
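
    As a rough illustration of the delivery scheme described above (high quality around the fixation point, lower quality everywhere else), the following sketch assigns a quality level to each tile of a 360° panorama based on its angular distance to the gaze direction. The tile grid, angular thresholds and quality labels are assumptions made for illustration, not parameters from the demo.

        # Sketch of gaze-driven tile quality selection for 360-degree video
        # (illustrative only; tile grid, thresholds and quality levels are assumptions).
        import math

        def angular_distance(yaw1, pitch1, yaw2, pitch2):
            """Great-circle angle (degrees) between two view directions."""
            y1, p1, y2, p2 = map(math.radians, (yaw1, pitch1, yaw2, pitch2))
            cos_d = (math.sin(p1) * math.sin(p2) +
                     math.cos(p1) * math.cos(p2) * math.cos(y1 - y2))
            return math.degrees(math.acos(max(-1.0, min(1.0, cos_d))))

        def tile_quality(tile_yaw, tile_pitch, gaze_yaw, gaze_pitch,
                         foveal_deg=10, mid_deg=30):
            """Pick a quality level per tile from its angular distance to the gaze."""
            d = angular_distance(tile_yaw, tile_pitch, gaze_yaw, gaze_pitch)
            if d <= foveal_deg:
                return "high"      # full resolution around the fixation point
            if d <= mid_deg:
                return "medium"    # transition band
            return "low"           # periphery / outside the field of view

        # 10x5 tile grid over the sphere, gaze straight ahead.
        grid = [(yaw, pitch) for yaw in range(-180, 180, 36) for pitch in range(-72, 90, 36)]
        plan = {t: tile_quality(*t, gaze_yaw=0, gaze_pitch=0) for t in grid}
        print(sum(q == "high" for q in plan.values()), "of", len(plan), "tiles at high quality")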

  • 3.
    Lungaro, Pietro
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS.
    Tollmar, Konrad
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS.
    Saeik, Firdose
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS.
    Mateu Gisbert, Conrado
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS, Mobile Service Laboratory (MS Lab).
    Dubus, Gaël
    Demonstration of a low-cost hyper-realistic testbed for designing future onboard experiences. 2018. In: Adjunct Proceedings - 10th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2018, Association for Computing Machinery, Inc., 2018, p. 235-238. Conference paper (Refereed)
    Abstract [en]

    This demo presents DriverSense, a novel experimental platform for designing and validating onboard user interfaces for self-driving and remotely controlled vehicles. Most currently existing vehicular testbeds and simulators are designed to reproduce with high fidelity the ergonomic aspects associated with the driving experience. However, with the increasing deployment of self-driving and remotely controlled or monitored vehicles, it is expected that the digital components of the driving experience will become more relevant, because users will be less engaged in the actual driving tasks and more involved with oversight activities. In this respect, high visual testbed fidelity becomes an important prerequisite for supporting the design and evaluation of future interfaces. DriverSense, which is based on the hyper-realistic video game GTA V, has been developed to satisfy this need. To showcase its experimental flexibility, a set of self-driving interfaces has been implemented, including Heads-Up Displays (HUDs), Augmented Reality (AR) and directional audio.

  • 4.
    Lungaro, Pietro
    et al.
    Tollmar, Konrad
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS.
    Saeik, Firdose
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS.
    Mateu Gisbert, Conrado
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS, Mobile Service Laboratory (MS Lab).
    Dubus, Gaël
    DriverSense: A hyper-realistic testbed for the design and evaluation of novel user interfaces in self-driving vehicles. 2018. In: Adjunct Proceedings - 10th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2018, Association for Computing Machinery, Inc., 2018, p. 127-131. Conference paper (Refereed)
    Abstract [en]

    This paper presents DriverSense, a novel experimental platform for designing and validating onboard user interfaces for self-driving and remotely controlled vehicles. Most currently existing academic and industrial testbeds and vehicular simulators are designed to reproduce with high fidelity the ergonomic aspects associated with the driving experience. However, with the increasing deployment of self-driving and remotely controlled vehicles, it is expected that the digital components of the driving experience will become more and more relevant, because users will be less engaged in the actual driving tasks and more involved with oversight activities. In this respect, high visual testbed fidelity becomes an important prerequisite for supporting the design and evaluation of future onboard interfaces. DriverSense, which is based on the hyper-realistic video game GTA V, has been developed to satisfy this need. To showcase its experimental flexibility, a set of selected case studies, including Heads-Up Displays (HUDs), Augmented Reality (AR) and directional audio solutions, is presented.

  • 5.
    Saeik, Firdose
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS.
    Lungaro, Pietro
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS.
    Tollmar, Konrad
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS.
    Demonstration of Gaze-Aware Video Streaming Solutions for Mobile VR. 2018. In: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2018. Conference paper (Refereed)
    Abstract [en]

    This demo features an embodiment of Smart Eye-tracking Enabled Networking (SEEN), a novel content delivery method for optimizing the provision of 360° video streaming. SEEN relies on eye-gaze information from connected eye trackers to provide high quality, in real time, in the proximity of users' fixation points, while lowering the quality at the periphery of the users' fields of view. The goal is to exploit the characteristics of human vision to reduce the bandwidth required for the mobile provision of future data-intensive services in Virtual Reality (VR). This demo provides a tangible experience of the tradeoffs among bandwidth consumption, network performance (RTT) and Quality of Experience (QoE) associated with SEEN's novel content provision mechanisms.
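
    To make the bandwidth side of this tradeoff concrete, here is a back-of-envelope estimate of the saving from streaming only a small foveal region at full quality; all numbers are illustrative assumptions, not measurements from the SEEN demo.

        # Back-of-envelope estimate of the bandwidth saving from foveated delivery
        # (all numbers are illustrative assumptions, not measurements from the demo).

        full_panorama_mbps = 50.0      # assumed bitrate for a uniformly high-quality 360 stream
        foveal_fraction    = 0.05      # assumed share of the sphere streamed at high quality
        peripheral_factor  = 0.1       # assumed bitrate of the low-quality layer vs. high quality

        foveated_mbps = full_panorama_mbps * (
            foveal_fraction + (1 - foveal_fraction) * peripheral_factor
        )
        saving = 1 - foveated_mbps / full_panorama_mbps
        print(f"foveated stream ~{foveated_mbps:.1f} Mbit/s, "
              f"~{saving:.0%} below the uniform-quality baseline")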

  • 6.
    Tollmar, Konrad
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication Systems, CoS, Mobile Service Laboratory (MS Lab).
    Lungaro, Pietro
    KTH, School of Information and Communication Technology (ICT), Communication Systems, CoS, Mobile Service Laboratory (MS Lab).
    Valero, Alfredo Fanghella
    KTH, School of Information and Communication Technology (ICT), Communication Systems, CoS, Mobile Service Laboratory (MS Lab).
    Mittal, Ashutosh
    KTH, School of Information and Communication Technology (ICT), Communication Systems, CoS, Mobile Service Laboratory (MS Lab).
    Beyond foveal rendering: Smart eye-tracking enabled networking (SEEN). 2017. In: ACM SIGGRAPH 2017 Talks, SIGGRAPH 2017, Association for Computing Machinery (ACM), 2017, article id 3085163. Conference paper (Refereed)
    Abstract [en]

    Smart Eye-tracking Enabled Networking (SEEN) is a novel end-to-end framework that uses real-time eye-gaze information to go beyond state-of-the-art solutions. Our approach effectively combines the computational savings of foveal rendering with the bandwidth savings required to enable future mobile VR content provision.
