A Weighted Optimization Approach to Time-of-Flight Sensor Fusion
2014 (English). In: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 23, no. 1, pp. 214-225. Article in journal (Refereed). Published.
Acquiring scene depth is a fundamental task in computer vision, and many applications in manufacturing, surveillance, and robotics rely on accurate scene information. Time-of-flight cameras can provide depth information in real time and overcome shortcomings of traditional stereo analysis. However, they offer limited spatial resolution, so sophisticated upscaling algorithms are sought after. In this paper, we present a sensor fusion approach to time-of-flight super resolution based on the combination of depth and texture sources. Unlike other texture-guided approaches, we interpret the depth upscaling process as a weighted energy optimization problem. Three different weights are introduced, each employing different available sensor data. The individual weights address object boundaries in depth, depth sensor noise, and temporal consistency. Applied in consecutive order, they form three weighting strategies for time-of-flight super resolution. Objective evaluations show advantages in depth accuracy and in depth-image-based rendering compared with state-of-the-art depth upscaling. Subjective view synthesis evaluation shows a significant increase in viewer preference, by a factor of four, in stereoscopic viewing conditions. To the best of our knowledge, this is the first extensive subjective test performed on time-of-flight depth upscaling. Objective and subjective results prove the suitability of our time-of-flight super resolution approach for depth scene capture.
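The abstract describes depth upscaling as a weighted energy optimization combining a low-resolution depth data term with texture-guided smoothness weights. The sketch below is an illustrative, simplified version of that general idea, not the paper's actual formulation: it minimizes a quadratic energy with a single texture-similarity weight via Jacobi iteration. The function name, parameters (`lam`, `sigma`, `iters`), and the Gaussian weight choice are all assumptions for demonstration.

```python
import numpy as np

def upscale_depth(depth_lr, texture_hr, scale, lam=1.0, sigma=0.1, iters=200):
    """Illustrative texture-guided depth upscaling (not the paper's method).

    Minimizes sum_p (D_p - D0_p)^2 + lam * sum_{p,q} w_pq (D_p - D_q)^2,
    where w_pq is a Gaussian similarity of high-res texture values, so
    smoothing is suppressed across texture edges (assumed depth boundaries).
    """
    # Nearest-neighbour upsample of the low-res depth serves as the data term.
    d0 = np.kron(depth_lr, np.ones((scale, scale)))
    d = d0.copy()

    def wgt(a, b):
        # Texture-similarity weight: near 1 inside a region, near 0 across edges.
        return np.exp(-((a - b) ** 2) / (2 * sigma ** 2))

    for _ in range(iters):
        num = d0.copy()           # numerator: data term contribution
        den = np.ones_like(d0)    # denominator: data term weight (1 per pixel)
        # Accumulate weighted 4-neighbour contributions (Jacobi update).
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            shifted = np.roll(d, (dy, dx), axis=(0, 1))
            tex_shift = np.roll(texture_hr, (dy, dx), axis=(0, 1))
            w_pq = lam * wgt(texture_hr, tex_shift)
            # Zero wrap-around borders so np.roll does not leak across edges.
            if dy == 1:
                w_pq[0, :] = 0
            if dy == -1:
                w_pq[-1, :] = 0
            if dx == 1:
                w_pq[:, 0] = 0
            if dx == -1:
                w_pq[:, -1] = 0
            num += w_pq * shifted
            den += w_pq
        d = num / den
    return d
```

With a texture edge aligned to the true depth discontinuity, the cross-edge weight vanishes and each region is smoothed independently, so the upscaled depth keeps a sharp boundary instead of the blur a plain bilinear upsample would produce.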
Place, publisher, year, edition, pages
IEEE Signal Processing Society, 2014. Vol. 23, no. 1, pp. 214-225.
Keywords
Sensor fusion, range data, time-of-flight sensors, depth map upscaling, 3D video, stereo vision
Identifiers
URN: urn:nbn:se:miun:diva-20415
DOI: 10.1109/TIP.2013.2287613
ISI: 000329195500017
Scopus ID: 2-s2.0-84888373138
OAI: oai:DiVA.org:miun-20415
DiVA: diva2:669198
Funder
Knowledge Foundation, 2009/0264
This work was supported in part by the KK Foundation of Sweden under Grant 2009/0264, in part by the EU European Regional Development Fund, Mellersta Norrland, Sweden, under Grant 00156702, and in part by Länsstyrelsen Västernorrland, Sweden, under Grant 00155148.
2013-12-03, 2013-12-03, 2014-07-24. Bibliographically approved.