Depth Map Upscaling Through Edge Weighted Optimization
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media (Realistic3D). ORCID iD: 0000-0002-2578-7896
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media (Realistic3D). ORCID iD: 0000-0003-3751-6089
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media (Realistic3D)
2012 (English). In: Proceedings of SPIE - The International Society for Optical Engineering / [ed] Atilla M. Baskurt, Robert Sitnik, SPIE - International Society for Optical Engineering, 2012, Art. no. 829008. Conference paper, Published paper (Refereed)
Abstract [en]

Accurate depth maps are a prerequisite in three-dimensional television, e.g. for high quality view synthesis, but this information is not always easily obtained. Depth information gained by correspondence matching from two or more views suffers from disocclusions and low-textured regions, leading to erroneous depth maps. These errors can be avoided by using depth from dedicated range sensors, e.g. time-of-flight sensors. Because these sensors offer only restricted resolution, the resulting depth data need to be adjusted to the resolution of the corresponding texture frame. Standard upscaling methods provide only limited quality results. This paper proposes a solution for upscaling low resolution depth data to match high resolution texture data. We introduce the Edge Weighted Optimization Concept (EWOC) for fusing low resolution depth maps with corresponding high resolution video frames by solving an overdetermined linear equation system. Similar to other approaches, we take information from the high resolution texture, but additionally validate this information with the low resolution depth to accentuate correlated data. Objective tests show an improvement in depth map quality in comparison to other upscaling approaches. This improvement is subjectively confirmed in the resulting view synthesis.
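
For readers who want to see what "solving an overdetermined linear equation system" guided by texture edges can look like in practice, the following is a minimal, illustrative Python/SciPy sketch. It is not the published EWOC algorithm: the Sobel-based edge term, the exponential weighting, the parameter names and the bilinear initialisation are assumptions made for illustration, and the sketch omits the validation of texture edges against the low resolution depth that distinguishes EWOC from purely texture-guided methods.

```python
# Illustrative sketch of texture-guided depth upscaling via an overdetermined
# sparse least-squares system. Weights, edge term and parameters are
# assumptions for illustration; this is NOT the published EWOC formulation.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr
from scipy.ndimage import sobel, zoom

def upscale_depth(depth_lr, texture_hr, data_weight=10.0, beta=5.0):
    """Upscale depth_lr to the resolution of texture_hr (grayscale, float)."""
    H, W = texture_hr.shape
    n = H * W
    idx = np.arange(n).reshape(H, W)

    # Texture edge strength steers the smoothness weights: little smoothing
    # across strong texture edges, strong smoothing in flat regions.
    grad = np.hypot(sobel(texture_hr, axis=0), sobel(texture_hr, axis=1))
    grad /= grad.max() + 1e-9

    rows, cols, vals, rhs = [], [], [], []
    eq = 0

    # Data term: anchor every pixel to a naively (bilinearly) upsampled
    # version of the low resolution depth map.
    depth_up = zoom(depth_lr, (H / depth_lr.shape[0], W / depth_lr.shape[1]), order=1)
    for p in range(n):
        rows.append(eq); cols.append(p); vals.append(data_weight)
        rhs.append(data_weight * depth_up.flat[p])
        eq += 1

    # Smoothness term between horizontal and vertical neighbours,
    # down-weighted wherever the texture shows an edge.
    for dy, dx in ((0, 1), (1, 0)):
        p = idx[:H - dy, :W - dx].ravel()
        q = idx[dy:, dx:].ravel()
        w = np.exp(-beta * np.maximum(grad.flat[p], grad.flat[q]))
        for pi, qi, wi in zip(p, q, w):
            rows += [eq, eq]; cols += [pi, qi]; vals += [wi, -wi]
            rhs.append(0.0)
            eq += 1

    # Roughly three equations per unknown, i.e. the system is overdetermined;
    # lsqr returns the least-squares solution.
    A = sparse.csr_matrix((vals, (rows, cols)), shape=(eq, n))
    return lsqr(A, np.asarray(rhs))[0].reshape(H, W)
```

Restricting the data term to the original low resolution sample positions and modulating the edge weights by the low resolution depth would bring this sketch closer to the validation idea described in the abstract.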

Place, publisher, year, edition, pages
SPIE - International Society for Optical Engineering, 2012. Art. no. 829008.
Keyword [en]
3DTV, depth map, upscaling, time-of-flight, view synthesis, optimization, edge detection
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:miun:diva-15805
DOI: 10.1117/12.903921
ISI: 000304302300007
Scopus ID: 2-s2.0-84861935064
Local ID: STC
ISBN: 978-081948937-1 (print)
OAI: oai:DiVA.org:miun-15805
DiVA: diva2:487445
Conference
3-Dimensional Image Processing (3DIP) and Applications II, Burlingame, CA, 24–26 January 2012 (Code 90039)
Available from: 2012-02-16. Created: 2012-01-31. Last updated: 2017-08-22. Bibliographically approved.
In thesis
1. Depth Map Upscaling for Three-Dimensional Television: The Edge-Weighted Optimization Concept
2012 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

With the recent comeback of three-dimensional (3D) movies to the cinemas, there have been increasing efforts to spread the commercial success of 3D to new markets. The possibility of a 3D experience at home, such as three-dimensional television (3DTV), has generated a great deal of interest within the research and standardization community.

A central issue for 3DTV is the creation and representation of 3D content. Scene depth information plays a crucial role in all parts of the distribution chain, from content capture via transmission to the actual 3D display. This depth information is transmitted in the form of depth maps and is accompanied by corresponding video frames, e.g. for Depth Image Based Rendering (DIBR) view synthesis. Nonetheless, scenarios do exist for which the original spatial resolutions of depth maps and video frames do not match, e.g. sensor-driven depth capture or asymmetric 3D video coding. This resolution discrepancy is a problem, since DIBR requires accordance between the video frame and the depth map. A considerable amount of research has been conducted into ways to match low-resolution depth maps to high-resolution video frames. Many proposed solutions utilize corresponding texture information in the upscaling process; however, most fail to check this information for validity.
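
To illustrate why DIBR needs the depth map and the video frame at matching resolutions, here is a minimal forward-warping sketch for a parallel camera setup. It is a reading aid only, not material from the thesis; the 8-bit depth-to-metric mapping, the baseline and the focal length are assumed example values.

```python
# Minimal DIBR forward-warping sketch (parallel camera setup). Every texture
# pixel needs a depth value at the same resolution, which is why a resolution
# mismatch between depth map and video frame is a problem for view synthesis.
# Baseline, focal length and the depth-to-metric mapping are assumed values.
import numpy as np

def synthesize_view(texture, depth, focal_px=1000.0, baseline_m=0.05,
                    z_near=0.5, z_far=10.0):
    H, W = depth.shape
    # Map 8-bit depth values to metric depth Z (common inverse-depth mapping).
    z = 1.0 / (depth / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    disparity = np.round(focal_px * baseline_m / z).astype(int)

    out = np.zeros_like(texture)
    zbuf = np.full((H, W), np.inf)
    ys, xs = np.mgrid[0:H, 0:W]
    xt = xs - disparity                      # column in the virtual view
    valid = (xt >= 0) & (xt < W)
    for y, x, xv in zip(ys[valid], xs[valid], xt[valid]):
        if z[y, x] < zbuf[y, xv]:            # z-buffer: keep the closest surface
            zbuf[y, xv] = z[y, x]
            out[y, xv] = texture[y, x]
    return out                               # disocclusions remain as holes
```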

In the pursuit of better 3DTV quality, this thesis presents the Edge-Weighted Optimization Concept (EWOC), a novel texture-guided depth upscaling application that addresses this lack of information validation. EWOC uses edge information from video frames as guidance in the depth upscaling process and, additionally, confirms this information against the original low resolution depth. Over the course of four publications, EWOC is applied in 3D content creation and distribution. Various guidance sources, such as different color spaces or texture pre-processing, are investigated. An alternative depth compression scheme, based on depth map upscaling, is proposed, and extensions for increased visual quality and computational performance are presented in this thesis. EWOC was evaluated and compared with competing approaches, with the main focus consistently on the visual quality of rendered 3D views. The results show an increase in both objective and subjective visual quality compared to state-of-the-art depth map upscaling methods. This quality gain motivates the choice of EWOC in applications affected by low resolution depth.

In the end, EWOC can improve 3D content generation and distribution, enhancing the 3D experience to boost the commercial success of 3DTV.

Place, publisher, year, edition, pages
Sundsvall, Sweden: Mittuniversitetet, 2012. 57 p.
Series
Mid Sweden University licentiate thesis, ISSN 1652-8948 ; 92
Keyword
3d video, 3DTV, video coding, capture, distribution, EWOC, depth map upscaling, time-of-flight
National Category
Signal Processing
Identifiers
URN: urn:nbn:se:miun:diva-17048
ISBN: 978-91-87103-41-4
Presentation
2012-11-22, O111, Mittuniversitetet - Holmgatan 10, Sundsvall, 09:00 (English)
Opponent
Supervisors
Available from: 2012-10-22. Created: 2012-09-24. Last updated: 2017-08-22. Bibliographically approved.
2. Gaining Depth: Time-of-Flight Sensor Fusion for Three-Dimensional Video Content Creation
2014 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

The successful revival of three-dimensional (3D) cinema has generated a great deal of interest in 3D video. However, contemporary eyewear-assisted display technologies are not well suited to the less restricted scenarios outside movie theaters. The next generation of 3D displays, autostereoscopic multiview displays, overcome the restrictions of traditional stereoscopic 3D and can provide an important boost for 3D television (3DTV). On the other hand, such displays require scene depth information in order to reduce the amount of necessary input data. Acquiring this information is complex and challenging, which restricts content creators and limits the amount of available 3D video content. Nonetheless, without broad and innovative 3D television programs, even next-generation 3DTV will lack customer appeal. Therefore, simplified 3D video content generation is essential for the medium's success.

This dissertation surveys the advantages and limitations of contemporary 3D video acquisition. Based on these findings, a combination of dedicated depth sensors, so-called Time-of-Flight (ToF) cameras, and video cameras is investigated with the aim of simplifying 3D video content generation. The concept of Time-of-Flight sensor fusion is analyzed in order to identify suitable courses of action for high quality 3D video acquisition. In order to overcome the main drawbacks of current Time-of-Flight technology, namely high sensor noise and low spatial resolution, a weighted optimization approach for Time-of-Flight super-resolution is proposed. This approach incorporates video texture, measurement noise and temporal information for high quality 3D video acquisition from a single video plus Time-of-Flight camera combination. Objective evaluations show benefits with respect to state-of-the-art depth upsampling solutions. Subjective visual quality assessment confirms the objective results, with a significant increase in viewer preference by a factor of four. Furthermore, the presented super-resolution approach can be applied to other applications, such as depth video compression, providing bit rate savings of approximately 10 percent compared to competing depth upsampling solutions. The work presented in this dissertation has been published in two scientific journals and five peer-reviewed conference proceedings.
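
As a purely illustrative companion to the description above, the sketch below shows one way per-pixel confidence weights from texture edges, ToF measurement noise and temporal change could be formed and combined before a weighted optimization step. The exponential weighting functions, the amplitude-based noise model and the multiplicative combination are assumptions made for this sketch, not the formulation used in the dissertation.

```python
# Sketch: per-pixel confidence weights from texture edges, ToF measurement
# noise and temporal change, combined multiplicatively. All functional forms
# and parameters are illustrative assumptions, not the dissertation's method.
import numpy as np
from scipy.ndimage import sobel

def confidence_weights(texture, tof_amplitude, prev_texture,
                       beta_edge=5.0, beta_noise=2.0, beta_time=3.0):
    # Texture term: low confidence across strong luminance edges.
    grad = np.hypot(sobel(texture, axis=0), sobel(texture, axis=1))
    w_edge = np.exp(-beta_edge * grad / (grad.max() + 1e-9))

    # Noise term: ToF range noise roughly grows as the returned signal
    # amplitude drops, so low amplitude means low confidence.
    amp = tof_amplitude / (tof_amplitude.max() + 1e-9)
    w_noise = 1.0 - np.exp(-beta_noise * amp)

    # Temporal term: distrust pixels that changed strongly since the last frame.
    diff = np.abs(texture - prev_texture)
    w_time = np.exp(-beta_time * diff / (diff.max() + 1e-9))

    return w_edge * w_noise * w_time  # combined weight map in (0, 1]
```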

In conclusion, Time-of-Flight sensor fusion can help to simplify 3D video content generation, consequently supporting a larger variety of available content. Thus, this dissertation provides important inputs towards broad and innovative 3D video content, hopefully contributing to the future success of next-generation 3DTV.

Place, publisher, year, edition, pages
Sundsvall: Mittuniversitetet, 2014. 228 p.
Series
Mid Sweden University doctoral thesis, ISSN 1652-893X ; 185
Keyword
3D video, Time-of-Flight, depth map acquisition, optimization, 3DTV, ToF, upsampling, super-resolution, sensor fusion
National Category
Computer Systems
Identifiers
URN: urn:nbn:se:miun:diva-21938
Local ID: STC
ISBN: 978-91-87557-49-1
Archive number: STC
OAI: STC
Public defence
2014-06-04, L111, Holmgatan 10, Sundsvall, 10:00 (English)
Opponent
Supervisors
Funder
Knowledge Foundation, 2009/0264
Available from: 2014-05-16. Created: 2014-05-14. Last updated: 2017-08-22. Bibliographically approved.

Open Access in DiVA

Schwarz_3DIP2012 (FULLTEXT01.pdf, 543 kB, fulltext, application/pdf)

Other links

Publisher's full text | Scopus

Search in DiVA

By author/editor
Schwarz, Sebastian; Sjöström, Mårten; Olsson, Roger
By organisation
Department of Information Technology and Media
Electrical Engineering, Electronic Engineering, Information Engineering

