The explosive growth of Over-The-Top (OTT) online video strains the capacity of operators' networks, which severely threatens the video quality perceived by end users. Since video is very bandwidth consuming, its distribution costs are becoming too high to scale with the network investments required to support the increasing bandwidth demand. Content providers and operators are searching for solutions that reduce this video traffic load without degrading their customers' perceived Quality of Experience (QoE). This paper proposes a method that can programmatically optimize video content for a desired QoE, according to perceptual video quality and device display properties, while achieving bandwidth and storage savings for content providers, operators, and end users. The preliminary results obtained with a Samsung Galaxy S3 phone show that up to 60% savings can be achieved by optimizing movies without compromising perceptible video quality, and up to 70% for a perceptible, but not annoying, video quality difference. Tailoring video optimization to individual user perception can provide seamless QoE delivery across all users, with a low overhead (i.e., 10%) required to achieve this goal. Finally, two applications of video optimization, QoE-aware delivery and storage, are proposed and examined.
This paper describes a novel QoE-aware adaptive video streaming method that enhances the viewing experience on mobile devices and reduces the cellular network bandwidth consumed by Dynamic Adaptive Streaming over HTTP (DASH), by considering perceptual video quality and channel data rate conditions in the bitrate adaptation process. By streaming a video optimized for the particular video quality and channel conditions to a mobile device, we can improve the worst video qualities caused by DASH streaming and reduce quality variations using fewer bits.
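A minimal sketch of the kind of quality-aware bitrate selection described above. The representation list, the quality scores and the margin are illustrative assumptions, not the paper's actual algorithm: the idea is simply that, among the representations the channel can sustain, a cheaper one may be perceptually indistinguishable from the most expensive one.

```python
# Hypothetical sketch of QoE-aware bitrate adaptation: among the DASH
# representations sustainable at the estimated channel rate, pick the
# cheapest one whose perceptual quality is within a small margin of the
# best sustainable quality (diminishing returns above that point).

def select_bitrate(representations, channel_rate_kbps, quality_margin=0.05):
    """representations: list of (bitrate_kbps, perceptual_quality in [0, 1])."""
    sustainable = [r for r in representations if r[0] <= channel_rate_kbps]
    if not sustainable:
        return min(representations)          # fall back to the lowest bitrate
    best_quality = max(q for _, q in sustainable)
    # cheapest representation that is perceptually close to the best one
    candidates = [r for r in sustainable if best_quality - r[1] <= quality_margin]
    return min(candidates)

reps = [(400, 0.55), (800, 0.78), (1500, 0.90), (3000, 0.93)]
print(select_bitrate(reps, channel_rate_kbps=3200))   # (1500, 0.9): 3000 adds little quality
```

With a rate-only policy the client would pick the 3000 kbps representation; the perceptual criterion halves the bitrate for a near-identical quality score.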
The arrival of smartphones and tablets, along with a flat-rate mobile Internet pricing model, has driven increasing adoption of mobile data services. According to recent studies, video has been the main driver of mobile data consumption, with a higher growth rate than any other mobile application. However, streaming medium/high-quality video files can be an issue in a mobile environment where the available capacity needs to be shared among a large number of users. Additionally, the energy consumption of mobile devices increases proportionally with the duration of data transfers, which depends on the download data rates achievable by the device. In this respect, the adoption of opportunistic content pre-fetching schemes, which exploit times and locations with high data rates to deliver content before a user requests it, has the potential to reduce the energy consumption associated with content delivery and improve the user's quality of experience, by allowing playback of pre-stored content with virtually no perceived interruptions or delays. This paper presents a family of opportunistic content pre-fetching schemes and compares their performance to standard on-demand access to content. Using a simulation approach on experimental data, collected with monitoring software installed in mobile terminals, we show that content pre-fetching can reduce the energy consumption of mobile devices by up to 30% compared to the on-demand download of the same file, with a time window of 1 hour needed to complete the content prepositioning.
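The energy argument above reduces to simple arithmetic: if the radio draws roughly constant power while active, transfer energy scales with transfer time, so downloading at a higher data rate proportionally cuts the energy spent. A back-of-the-envelope sketch (the power figure and file size are assumptions for illustration, not measurements from the paper):

```python
# Back-of-the-envelope illustration of why prefetching at high data rates
# saves energy: radio energy scales with transfer time, E = P_active * size / rate.

P_ACTIVE_W = 1.2          # assumed radio power while transferring (W)

def transfer_energy_j(size_mbit, rate_mbps, power_w=P_ACTIVE_W):
    # (Mbit / Mbit-per-second) * W = seconds * watts = joules
    return power_w * size_mbit / rate_mbps

on_demand = transfer_energy_j(size_mbit=80, rate_mbps=2.0)    # poor coverage
prefetch  = transfer_energy_j(size_mbit=80, rate_mbps=8.0)    # opportunistic peak
saving = 1 - prefetch / on_demand
print(f"energy saving: {saving:.0%}")    # 75% for a 4x better data rate
```

Real gains are lower (as in the 30% figure above) because of tail energy, signalling overhead and imperfect rate prediction, but the direction of the effect is the same.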
This paper evaluates the energy cost reduction of Over-The-Top mobile video content prefetching under various network conditions. The energy cost reduction is achieved by reducing the time needed to download content over the radio interface, prefetching data at higher data rates than the standard on-demand download. To simulate various network conditions and user behavior, a stochastic access channel model was built and validated against actual user traces. By changing the model parameters, the energy cost reduction of prefetching in different channel settings was determined, identifying the regions in which prefetching is likely to deliver the largest energy gains. The results demonstrate that the largest gains (up to 70%) can be obtained for data rates with strong correlation and low noise variation. Additionally, based on statistical properties of the data rates, such as the peak-to-mean ratio and the average data rate, a prefetching strategy can be devised that enables the highest energy cost reduction obtainable with the proposed prefetching scheme.
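One common way to build a stochastic channel of the kind described above, with tunable correlation and noise variation, is a mean-reverting autoregressive process. The sketch below is our own illustration, not the paper's validated model; parameter names are ours. High correlation produces long-lived rate peaks, which is exactly what gives prefetching something to exploit.

```python
# Minimal sketch of a stochastic access-channel model with tunable
# correlation and noise variance: a mean-reverting AR(1) process for the
# instantaneous data rate (illustrative, not the paper's fitted model).
import random

def simulate_rates(mean_mbps, corr, noise_std, steps, seed=0):
    rng = random.Random(seed)
    rate = mean_mbps
    rates = []
    for _ in range(steps):
        # mean-reverting AR(1): high corr -> long-lived peaks to prefetch on
        rate = mean_mbps + corr * (rate - mean_mbps) + rng.gauss(0, noise_std)
        rates.append(max(rate, 0.1))          # keep the rate positive
    return rates

rates = simulate_rates(mean_mbps=4.0, corr=0.95, noise_std=0.5, steps=600)
peak_to_mean = max(rates) / (sum(rates) / len(rates))
print(f"peak-to-mean ratio: {peak_to_mean:.2f}")
```

Sweeping `corr` and `noise_std` over a grid and measuring the energy of "download at the next peak" versus "download now" reproduces the kind of gain-region analysis described in the abstract.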
The current content provision methods and associated pricing and business models are challenged by the traffic requirements anticipated for future 'data intensive' services. In order to deliver substantially higher peak rates, operators will need to deploy a much denser infrastructure and/or acquire more spectrum, thus significantly increasing their CAPEX and OPEX and reducing revenues. To improve the utilization of available network resources, this paper presents ActiveCast, a disruptive content delivery paradigm that supports opportunistic content pre-fetching by introducing semantic and context awareness into the currently 'agnostic' networking paradigm. The experimental investigations presented in the paper focus on mobile video provision, and a content provider, integrated with Facebook and YouTube, has been developed and used to identify socially relevant content for a set of test users. Part of the studies presented in the paper aim at experimentally understanding the structure of the energy costs associated with pre-fetching and at defining a delivery strategy that allows controlling the amount of energy invested. A comparison between a centralized implementation, in which pre-fetching is coordinated by the mobile operators, and an Over-The-Top (OTT) implementation of ActiveCast is also presented. The results show that complementing the context information available at individual user terminals with traffic information, shared by mobile operators through the ActiveCast API, can substantially reduce the energy costs of content delivery, as compared with 'on demand' video streaming. Additionally, opportunistically exploiting connections with WiFi APs can amplify the gains already achievable by prefetching on wide area networks.
This paper presents a novel platform for supporting human-centric design of future on-board user interfaces. It is conceived to facilitate the interplay and information exchange among on-board digital information systems, autonomous AI agents, and human passengers and drivers. Two Human-to-AI (H2AI) Augmented Reality (AR) interfaces, characterized by different degrees of immersivity, have been designed to provide passengers with intuitive visualization of the information available in the AI modules controlling the car's behavior. To validate the proposed user-centric paradigm, a novel testbed has been developed for assessing whether H2AI solutions can be effective in increasing human trust in self-driving cars. The results of our initial experimental studies, performed with several subjects, clearly show that visualizing AI information brings a critical understanding of the autonomous driving processes, which in turn leads to a substantial increase of trust in the system.
The apps paradigm is rapidly changing the way in which content is transferred and consumed in mobile networks. Currently, the traffic loads generated by apps are mostly provided as Over-The-Top (OTT) services, essentially transparent to the cellular network operators, while some of the content is delivered and updated in user terminals through "background transmissions", without user intervention. With increasing traffic volumes, associated with richer content and more advanced devices, the apps paradigm might create severe system inefficiencies. In this paper we explore a number of app-based content delivery methods and investigate their performance along multiple relevant dimensions, such as terminal energy consumption, the time needed to access in-app content, and the impact on the user experience associated with other mobile data services. The proposed methods include opportunistic content pre-fetching and are characterized by different degrees of context-awareness. One approach considers only context information at the individual user terminals, while another includes network context information and assumes that content distribution coordination is performed by the network operators. The results show that, as the number of apps whose content needs to be kept simultaneously updated increases, only the operator-driven approach remains feasible, mainly from an energy perspective. However, for a small number of maintained apps, pre-fetching schemes are superior to standard "on-demand" content delivery solutions, suggesting that pre-fetching should be limited to the subset of terminal apps with the highest user access probabilities.
Most currently deployed networks are typically dimensioned for the "peak hour" traffic demand. Opportunistically utilizing the resulting "excess" resources might be an effective way to improve utilization and lower "production" costs. This paper proposes and evaluates a novel concept, called ActiveCast, and the corresponding architecture for a network- and user-behavior-aware mobile content delivery system. Using real traffic measurements in urban scenarios, we show that the concept improves resource utilization and allows serving significantly more users in a pre-existing network. Even with moderate amounts of reliable context information, ActiveCast has been shown to drastically improve both user quality of experience and network efficiency, as compared to conventional on-demand content delivery schemes.
To opportunistically exploit excess resources available at different times and locations, we propose in this paper to include context-aware information in the RRM schemes adopted in cellular networks. Both a user-centric and a network-centric approach to content pre-fetching are described and evaluated for different network dimensioning and service scenarios. The obtained results show that, for different levels of accuracy in predicting future content requests, operator-controlled pre-fetching outperforms the user-controlled approach, and that the former can also bring robustness and significant cost reduction as compared to "classical" RRM schemes: the achieved level of performance can be mapped into a tri-dimensional gain region where fewer BSs are needed, or more users can be served, or larger files delivered per user, while maintaining a given level of user-perceived service quality. Finally, considering also the deployment of content caches at the BSs, we show that the impact of backhaul limitations on experienced delays can be further mitigated when there are similar content interests among users.
A context-aware model for the delivery of content in mobile networks is introduced and studied through simulation. The model is based on predictive knowledge of mobile user and mobile group behavior. The predictive behavior is related to the consumption of mobile content on smart mobile terminals. The model proposes to change the time and the location of the delivery of predicted content consumption, to optimize the wireless network utilization and to improve the user experience. Different content delivery strategies, considering pre-fetching at the user terminals, caching at the base stations and multicast wireless transmissions, are proposed and investigated. The simulation results show substantial gains in the content delivery efficiency of cellular networks and improved user-perceived quality for a number of realistic network operation regimes.
A context-aware model for the delivery of content in mobile networks is presented and studied through simulation. The proposed model is based on predictive knowledge of mobile user behavior related to the consumption of content on smart terminals. Content prediction is here exploited to change the time and the location of content delivery, opportunistically utilizing the instantaneously available excess of resources in the network. This is anticipated both to increase network utilization and to enhance user-perceived service quality. However, since the accuracy of the context information, and of the associated content predictions, might have a significant impact on performance, our investigations have accounted for different content prediction capabilities as well as for various degrees of similarity in users' content requests. The obtained results show that substantial gains, both in terms of wireless network efficiency and improved user service perception, can be achieved, as compared to classical content delivery methods, for a number of realistic scenarios.
This paper presents a novel approach to content delivery for video streaming services. It exploits information from connected eye-trackers embedded in the next generation of VR Head Mounted Displays (HMDs). The proposed solution aims to deliver high visual quality, in real time, around the users' fixation points, while lowering the quality everywhere else. The goal of the proposed approach is to substantially reduce the overall bandwidth requirements for supporting VR video experiences while delivering high levels of user-perceived quality. The prerequisites for achieving these results are: (1) mechanisms that can cope with different degrees of latency in the system, and (2) solutions that support fast adaptation of video quality in different parts of a frame, without requiring a large increase in bitrate. A novel codec configuration, capable of supporting near-instantaneous video quality adaptation in specific portions of a video frame, is presented. The proposed method exploits in-built properties of HEVC encoders and, while it introduces a moderate amount of error, these errors are undetectable by users. Fast adaptation is the key to enabling gaze-aware streaming and its reduction in bandwidth. A testbed implementing gaze-aware streaming, together with a prototype HMD with an in-built eye tracker, is presented and was used for testing with real users. The studies quantified the bandwidth savings achievable by the proposed approach and characterized the relationship between Quality of Experience (QoE) and network latency. The results showed that up to 83% less bandwidth is required to deliver high QoE levels to the users, as compared to conventional solutions.
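The per-region quality adaptation described above can be pictured as a gaze-driven quantization map over the frame. The sketch below is our own illustration, not the paper's actual codec configuration: it assigns each HEVC-style tile a quantization parameter (QP) based on its distance from the fixation point, so only the foveal region is encoded at high quality (low QP); all names and values are assumptions.

```python
# Illustrative gaze-driven quality assignment: each tile gets a QP that
# depends on its distance from the fixation point (values are assumptions).
import math

def qp_map(cols, rows, gaze_col, gaze_row, qp_fovea=22, qp_bg=40, radius=1.5):
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            d = math.hypot(c - gaze_col, r - gaze_row)
            # high quality (low QP) inside the foveal radius, low outside
            row.append(qp_fovea if d <= radius else qp_bg)
        grid.append(row)
    return grid

for row in qp_map(cols=6, rows=4, gaze_col=2, gaze_row=1):
    print(row)
```

Since the background QP is much higher, the bulk of the frame is coded with very few bits, which is the source of the bandwidth savings; the map must be recomputed every time a fresh gaze sample arrives.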
Immersivemote is a novel technology combining our former foveated streaming solution with our novel foveated AI concept. While we have previously shown that foveated streaming can achieve 90% bandwidth savings compared to existing streaming solutions, foveated AI is designed to enable real-time video augmentations that are controlled through eye gaze. The combined solution is therefore capable of effectively interfacing remote operators with mission-critical information obtained, in real time, from task-aware machine understanding of the scene and IoT data.
This paper explores the key tradeoffs in the design and optimization of eye-gaze-based content provision for video streaming services. The proposed end-to-end solution, called "foveated content provision", uses real-time information from connected eye-trackers to dynamically deliver optimized video frames, with higher resolution in the areas corresponding to the users' fovea, while lowering the quality at the periphery. In this novel approach, the main system constraint is the achievable latency (RTT) of the communication link between content servers and user clients. To cope with various latency levels, several design choices are presented, including varying the size of the high-quality region or the resolution of the areas in the user's peripheral field of view. The paper presents a set of experimental results, obtained with real users via a novel event-driven experience sampling method, which was specifically developed to address Quality of Experience (QoE) in foveated content delivery. The results show that several operating points within the system parameter space allow the delivery of high levels of QoE, even at latency levels comparable to current 4G networks.
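One way to reason about the "size of the high-quality region versus RTT" design choice mentioned above: the high-quality region must still cover the gaze after one round trip, so it should grow with the distance the eye can travel in that time. The sketch below is our own illustration of this tradeoff; the eye-speed bound, baseline region size and safety margin are assumptions, not the paper's measured values.

```python
# Sketch of one latency-coping design choice: enlarge the high-quality
# region so it still covers the gaze after one round trip (all figures
# below are illustrative assumptions).

MAX_GAZE_SPEED_DEG_S = 300.0     # assumed upper bound on eye speed (deg/s)
FOVEA_DEG = 5.0                  # assumed baseline high-quality region (deg)

def hq_region_radius_deg(rtt_ms, margin=1.2):
    # the gaze can drift up to speed * RTT before the next update arrives
    drift = MAX_GAZE_SPEED_DEG_S * (rtt_ms / 1000.0)
    return FOVEA_DEG + margin * drift

print(f"{hq_region_radius_deg(20):.1f} deg")   # 12.2 deg at a 20 ms RTT
print(f"{hq_region_radius_deg(50):.1f} deg")   # 23.0 deg at a 4G-like RTT
```

The quadratic growth of region *area* with radius is what makes high RTTs expensive: past some latency it becomes cheaper to raise the peripheral resolution instead, which is the second design choice listed in the abstract.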
This demo showcases a novel approach to content delivery for 360° video streaming. It exploits information from connected eye-trackers embedded in the users' VR HMDs. The presented technology enables the delivery of high quality, in real time, around the users' fixation points, while lowering the image quality everywhere else. The goal of the proposed approach is to substantially reduce the overall bandwidth requirements for supporting VR video experiences while delivering high levels of user-perceived quality. The network connection between the VR system and the content server is emulated in this demo, allowing users to experience the QoE performance achievable with data rates and RTTs in the range of current 4G and upcoming 5G networks. Users can further control additional service parameters, including video type, content resolution in the foveal region and the background, and the size of the foveal region. At the end of each run, users are presented with a summary of the amount of bandwidth consumed with the chosen system settings and a comparison with the cost of current content delivery solutions. The overall goal of this demo is to provide a tangible experience of the tradeoffs among bandwidth, RTT and QoE for the mobile provision of future data-intensive VR services.
This demo presents DriverSense, a novel experimental platform for designing and validating on-board user interfaces for self-driving and remotely controlled vehicles. Most currently existing vehicular testbeds and simulators are designed to reproduce with high fidelity the ergonomic aspects associated with the driving experience. However, with the increasing deployment of self-driving and remotely controlled or monitored vehicles, it is expected that the digital components of the driving experience will become more relevant, because users will be less engaged in the actual driving tasks and more involved in oversight activities. In this respect, high visual testbed fidelity becomes an important prerequisite for supporting the design and evaluation of future interfaces. DriverSense, which is based on the hyper-realistic video game GTA V, has been developed to satisfy this need. To showcase its experimental flexibility, a set of self-driving interfaces has been implemented, including Heads-Up Displays (HUDs), Augmented Reality (AR) and directional audio.
This paper presents DriverSense, a novel experimental platform for designing and validating on-board user interfaces for self-driving and remotely controlled vehicles. Most currently existing academic and industrial testbeds and vehicular simulators are designed to reproduce with high fidelity the ergonomic aspects associated with the driving experience. However, with the increasing deployment of self-driving and remotely controlled vehicular modalities, it is expected that the digital components of the driving experience will become more and more relevant, because users will be less engaged in the actual driving tasks and more involved in oversight activities. In this respect, high visual testbed fidelity becomes an important prerequisite for supporting the design and evaluation of future on-board interfaces. DriverSense, which is based on the hyper-realistic video game GTA V, has been developed to satisfy this need. To showcase its experimental flexibility, a set of selected case studies, including Heads-Up Displays (HUDs), Augmented Reality (AR) and directional audio solutions, is presented.
A novel experimental approach for investigating the performance of context-aware content delivery schemes is presented in this paper. An innovative testbed, capable of remotely controlling multiple terminals, injecting a wide range of traffic loads into real networks and monitoring different performance measures, has been developed and utilized for quantifying both the energy costs and the user-perceived service quality associated with different context-aware content pre-fetching schemes. In the implementation proposed in this paper, the context information required for performing content pre-fetching is extracted and utilized by individual user terminals and does not require any support from mobile operators. The performance of pre-fetching is compared to that of an on-demand content delivery scheme, for both video streaming and file downloading services. The results show that pre-fetching can not only increase user service appreciation by reducing the time needed to access the information, but can also significantly lower the amount of energy consumed in user terminals for retrieving the content. Our experiments further indicate that only limited content prediction capabilities are required to achieve these additional energy gains, making pre-fetching a solid candidate for the provision of a wide range of content types and services in both wide and local area networks.
In this paper, a semantic-aware model for radio resource management in wireless networks is introduced and studied through simulation. Through semantic awareness, the network can selectively manage the radio resource allocation based on an evaluation of the transferred content and its associated processing, and prioritize users that are close to experiencing interruptions, in order to improve the wireless resource utilization and the users' Quality of Experience (QoE). Different radio resource management (RRM) strategies are proposed and investigated, considering the buffer capacity at the terminals and the experience of the users over time while watching a video and waiting for resource allocation. The simulation results show that applying semantic awareness in the radio resource allocation can reduce the total duration, frequency and length of the interruptions during playback, which might positively affect the users' QoE.
Current wireless networks have no knowledge of the type and characteristics of the specific mobile services and contents they are providing (they are agnostic). On the other hand, with richer content and more advanced devices, improving the Quality of Experience (QoE) levels perceived by each user appears to be one of the main goals of the different actors in the telecommunications ecosystem. In this respect, network awareness of the transferred content types and their quality requirements can be important for delivering content that fulfills user requirements while optimizing the utilization of network resources. In this paper, we explore the effect of including semantic knowledge in the radio resource management (RRM) schemes of cellular systems as a way to reach this balance for video streaming services. To make this possible, we propose a number of semantic-aware RRM schemes that use information about the current status of the video playback, and investigate their performance based on the reduction of the total time of interruptions (TTI) perceived by users when a video streaming content is played. The achieved results are compared to those of the standard agnostic scheme implemented in cellular networks. The outcomes show that, by incorporating semantic-aware RRM schemes, more users can receive the content provided by a network within a tolerable level of TTI, while the number of users receiving High Definition (HD) videos can be increased, which in general terms might reduce the investment in infrastructure needed to fulfill users' requirements.
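The core of a semantic-aware RRM scheme of this kind can be sketched in a few lines: instead of scheduling agnostically, serve first the video users whose playback buffers are closest to running dry. This is our own minimal illustration of the idea, with hypothetical names and figures, not one of the paper's specific schemes.

```python
# Illustrative semantic-aware RRM policy: allocate the available resource
# slots to the video users nearest to a playback interruption.

def schedule(users, slots):
    """users: list of (user_id, buffered_playback_s); returns served ids."""
    at_risk_first = sorted(users, key=lambda u: u[1])
    return [uid for uid, _ in at_risk_first[:slots]]

users = [("a", 12.0), ("b", 0.8), ("c", 4.5), ("d", 30.0)]
print(schedule(users, slots=2))   # ['b', 'c'] -- the users nearest to stalling
```

An agnostic scheduler (e.g. round-robin) would sometimes spend slots on user "d", who has half a minute of buffered video, while user "b" stalls; the semantic policy trades that capacity for fewer interruptions, i.e. a lower TTI.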
This paper summarizes some of the initial research findings obtained in the SERMON project, funded by Wireless@KTH. The main focus of this paper is on video streaming transmission and the quantification of how much can be gained, in terms of user satisfaction and network resource utilization, by exploiting semantic knowledge at the network level. For this purpose, different QoE-centric RRM strategies are proposed and their performance evaluated with respect to a "classical" agnostic scheme, in a scenario where users have different QoE requirements for different content types and as a function of the device screen resolution during a live video streaming transmission.
This demo features an embodiment of Smart Eye-tracking Enabled Networking (SEEN), a novel content delivery method for optimizing the provision of 360° video streaming. SEEN relies on eye-gaze information from connected eye trackers to provide high quality, in real time, in the proximity of the users' fixation points, while lowering the quality at the periphery of the users' fields of view. The goal is to exploit the characteristics of human vision to reduce the bandwidth required for the mobile provision of future data-intensive services in Virtual Reality (VR). This demo provides a tangible experience of the tradeoffs among bandwidth consumption, network performance (RTT) and Quality of Experience (QoE) associated with SEEN's novel content provision mechanisms.
This paper presents the Mobile Services Living Laboratory - a practical approach for evaluating service provision in cellular networks. Our approach promotes an end-to-end view of the system by providing an effective means to store information from both terminal and network sides, together with integrated mechanisms for retrieving feedback on user-perceived service quality. The current implementation of the Living Laboratory is described and evaluated. Further, we show how the Living Laboratory is used to investigate both the effectiveness of context-aware opportunistic content delivery schemes in cellular networks, and the coexistence between M2M and user-generated traffic.
Smart Eye-tracking Enabled Networking (SEEN) is a novel end-to-end framework that uses real-time eye-gaze information to go beyond state-of-the-art solutions. Our approach can effectively combine the computational savings of foveated rendering with the bandwidth savings required to enable future mobile VR content provision.
This paper presents a novel experimental approach to quantifying the performance of a Quality of Experience (QoE)-aware resource management scheme in mobile networks. The main goal of this paper is to improve network efficiency by exploiting knowledge of the QoE information associated with online video streaming services. The investigations considered in the paper are performed using an innovative test-bed, developed to assess network efficiency for the provision of online video services of different qualities. The QoE model used in the proposed QoE-aware allocation scheme assumes a MOS-like grading function whose grades depend on both the duration of playtime interruptions and the streaming video quality (resolution). The results show that the proposed resource management scheme can deliver more than 40 percent higher QoE to the users of the system, as compared to current agnostic (not aware of QoE requirements and content characteristics) service models.
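A MOS-like grading function of the shape described above can be sketched as a base grade set by the resolution, discounted by the accumulated interruption time, and clamped to the 1–5 MOS scale. The base grades and the penalty coefficient below are illustrative assumptions, not the paper's fitted model.

```python
# Hypothetical MOS-like grading function: the grade rises with resolution
# and falls with interruption duration (coefficients are assumptions).

RESOLUTION_BASE_MOS = {"360p": 3.0, "720p": 4.0, "1080p": 4.6}

def mos(resolution, interruption_s, penalty_per_s=0.4):
    grade = RESOLUTION_BASE_MOS[resolution] - penalty_per_s * interruption_s
    return max(1.0, min(5.0, grade))      # clamp to the 1..5 MOS scale

print(f"{mos('1080p', interruption_s=0.0):.1f}")   # 4.6
print(f"{mos('1080p', interruption_s=5.0):.1f}")   # 2.6: stalls outweigh resolution
print(f"{mos('360p',  interruption_s=0.0):.1f}")   # 3.0
```

The middle case shows why a QoE-aware allocator can beat an agnostic one: a few seconds of stalling at 1080p can score below uninterrupted 360p, so spending resources on avoiding interruptions pays more than maximizing resolution.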
Location-aware services and applications have become quite popular in the daily life of mobile users. The Global Positioning System (GPS) is available in almost all new smartphones as a mature and accurate positioning technique. GPS, as a satellite-based navigation system, determines the current location of users by receiving signals from satellites. Satellite signals cannot propagate properly inside buildings, which makes GPS unusable for indoor positioning. In addition, GPS consumes too much energy to be useful for many applications on mobile phones. There are many proposed alternatives to GPS, but they are not as accurate. Combining those alternatives can improve the accuracy, but the result varies widely depending on the user behavior and environment. This paper presents a novel architecture for semantic-aware positioning that chooses the best positioning method(s) by exploiting semantic knowledge.
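The selection step at the heart of such an architecture can be pictured as a simple decision over semantic context: indoors, satellite signals are unavailable, and on low battery the energy-hungry GPS should be avoided. The method names and thresholds below are our own illustrative assumptions, not the paper's architecture.

```python
# Illustrative semantic-aware positioning selector: pick the positioning
# method from simple semantic context (names and thresholds are assumptions).

def choose_method(indoors, battery_pct, needs_high_accuracy):
    if indoors:
        return "wifi"                 # satellite signals do not reach indoors
    if needs_high_accuracy and battery_pct > 20:
        return "gps"                  # accurate but energy hungry
    return "cell"                     # coarse, but cheap on energy

print(choose_method(indoors=True,  battery_pct=80, needs_high_accuracy=True))   # wifi
print(choose_method(indoors=False, battery_pct=80, needs_high_accuracy=True))   # gps
print(choose_method(indoors=False, battery_pct=10, needs_high_accuracy=True))   # cell
```

A real selector would weigh more context (motion state, map availability, past accuracy per location) and could combine several methods, but the structure, semantic inputs driving method choice, is the same.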