Edge-aided virtual view rendering for multiview video plus depth
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems. (Realistic3D)
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems. (Realistic3D) ORCID iD: 0000-0003-3751-6089
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems. (Realistic3D)
Mid Sweden University, Faculty of Science, Technology and Media, Department of Information and Communication systems.
2013 (English). In: Proceedings of SPIE Volume 8650: 3D Image Processing (3DIP) and Applications 2013, Burlingame, CA, USA, 2013. SPIE - International Society for Optical Engineering, 2013, Art. no. 86500E. Conference paper, Published paper (Other academic)
Abstract [en]

Depth-Image-Based Rendering (DIBR) of virtual views is a fundamental method in three-dimensional (3D) video applications to produce different perspectives from texture and depth information, in particular from the multi-view-plus-depth (MVD) format. Artifacts are still present in virtual views as a consequence of imperfect rendering using existing DIBR methods. In this paper, we propose an alternative DIBR method for MVD. In the proposed method we introduce an edge pixel and interpolate pixel values in the virtual view using the actual projected coordinates from two adjacent views, by which cracks and disocclusions are automatically filled. In particular, we propose a method to merge pixel information from the two adjacent views in the virtual view before the interpolation; we apply a weighted averaging of projected pixels within the range of one pixel in the virtual view. We compared virtual view images rendered by the proposed method to the corresponding view images rendered by state-of-the-art methods. Objective metrics demonstrated an advantage of the proposed method for most of the investigated media contents. Subjective test results showed a preference for different methods depending on media content, and the test could not demonstrate a significant difference between the proposed method and the state-of-the-art methods.
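To make the merging step concrete, the following sketch forward-warps one image row from two adjacent rectified views into the virtual view and blends the projections by distance-weighted averaging within a one-pixel range, so small cracks are covered by neighbouring splats. This is a minimal illustration under assumed conventions (per-pixel disparities, a virtual view at fraction alpha between the two inputs, illustrative function names); the paper's edge-pixel handling and occlusion resolution are not reproduced here.

import numpy as np

def forward_warp_1d(tex, disp, shift, width):
    # Splat one row of a reference view into the virtual view.
    # shift is the signed fraction of the disparity to apply, e.g. -alpha for
    # the left view and +(1 - alpha) for the right view (assumed convention).
    acc = np.zeros(width)    # weighted sum of projected texture values
    wsum = np.zeros(width)   # sum of weights per virtual-view pixel
    for x in range(len(tex)):
        xv = x + shift * disp[x]          # sub-pixel coordinate in the virtual view
        base = int(np.floor(xv))
        for xi in (base, base + 1):       # the two nearest virtual-view pixels
            if 0 <= xi < width and abs(xv - xi) <= 1.0:
                w = 1.0 - abs(xv - xi)    # projections closer to the pixel weigh more
                acc[xi] += w * tex[x]
                wsum[xi] += w
    return acc, wsum

def render_virtual_row(tex_l, disp_l, tex_r, disp_r, alpha=0.5):
    # Merge the splats from both adjacent views, then normalise per pixel.
    width = len(tex_l)
    acc_l, w_l = forward_warp_1d(tex_l, disp_l, -alpha, width)
    acc_r, w_r = forward_warp_1d(tex_r, disp_r, 1.0 - alpha, width)
    acc, wsum = acc_l + acc_r, w_l + w_r
    out = np.full(width, np.nan)          # pixels hit by neither view stay NaN (disocclusions)
    hit = wsum > 0
    out[hit] = acc[hit] / wsum[hit]
    return out

Because every projection contributes to its two nearest virtual-view pixels, one-pixel cracks between splats are averaged over rather than left empty, which is the behaviour the abstract describes; larger disocclusions still require separate filling.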

Place, publisher, year, edition, pages
SPIE - International Society for Optical Engineering, 2013. Art. no. 86500E.
Keyword [en]
View rendering, 3DTV, multiview plus depth (MVD), depth-image-based-rendering (DIBR), warping
National Category
Signal Processing
Identifiers
URN: urn:nbn:se:miun:diva-18474
DOI: 10.1117/12.2004116
ISI: 000322110500012
Scopus ID: 2-s2.0-84878267120
Local ID: STC
ISBN: 978-081949423-8 (print)
OAI: oai:DiVA.org:miun-18474
DiVA: diva2:605010
Conference
3D Image Processing (3DIP) and Applications 2013, 3-7 Feb 2013, Burlingame, CA, USA, Conference 8650
Available from: 2013-02-12 Created: 2013-02-12 Last updated: 2017-08-22
In thesis
1. View Rendering for 3DTV
2013 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Three-dimensional (3D) technologies are advancing rapidly. Three-Dimensional Television (3DTV) aims at creating a 3D experience for the home user. Moreover, multiview autostereoscopic displays provide a depth impression without requiring any special glasses and can be viewed from multiple positions. One of the key issues in the 3DTV processing chain is content generation from the available input data formats, video plus depth and multiview video plus depth. These data make it possible to produce virtual views using depth-image-based rendering. Although depth-image-based rendering is an efficient method, it is known for the appearance of artifacts such as cracks, corona and empty regions in rendered images. While several approaches have tackled the problem, reducing the artifacts in rendered images is still an active field of research.


Two problems are addressed in this thesis in order to achieve a better 3D video quality in the context of view rendering: firstly, how to improve the quality of rendered views using a direct approach (i.e. without applying specific processing steps for each artifact), and secondly, how to fill large missing areas in a visually plausible manner using neighbouring details from around the missing regions. This thesis introduces a new depth-image-based rendering method and a depth-based texture inpainting method to address these two problems. The first problem is solved by an edge-aided rendering method that relies on the principles of forward warping and one-dimensional interpolation. The second problem is addressed by the depth-included curvature inpainting method, which uses texture details at the appropriate depth level around disocclusions; a simplified sketch of the underlying depth-guided filling idea follows below.
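The sketch below is not the thesis's depth-included curvature inpainting; it only shows the shared underlying principle of completing a disocclusion from the background side, chosen by comparing the depth values of the nearest known neighbours (the one-dimensional setting, the convention that a larger depth value means farther away, and all names are assumptions).

import numpy as np

def fill_disocclusions_1d(tex, depth, hole_mask):
    # Toy depth-guided hole filling for one image row: each hole pixel copies
    # texture from whichever of its nearest known neighbours lies deeper in
    # the scene, since disocclusions are uncovered background, not foreground.
    tex = np.array(tex, dtype=float)
    w = len(tex)
    for x in np.where(hole_mask)[0]:
        left = next((i for i in range(x - 1, -1, -1) if not hole_mask[i]), None)
        right = next((i for i in range(x + 1, w) if not hole_mask[i]), None)
        candidates = [i for i in (left, right) if i is not None]
        if not candidates:
            continue                                     # fully unknown row stays unfilled
        src = max(candidates, key=lambda i: depth[i])    # prefer the background side
        tex[x] = tex[src]
    return tex

The method proposed in the thesis additionally draws on texture details at the appropriate depth level around the disocclusion, which this simple constant extension from the background side does not attempt.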


The proposed edge-aided rendering method and depth-included curvature inpainting method are evaluated and compared with state-of-the-art methods. The results show an increase in objective quality and a visual gain over the reference methods. The quality gain is encouraging, as the edge-aided rendering method omits the specific processing steps otherwise used to remove rendering artifacts. Moreover, the results show that large disocclusions can be effectively filled using the depth-included curvature inpainting approach. Overall, the proposed approaches improve content generation for 3DTV and, additionally, for free-viewpoint television.

Place, publisher, year, edition, pages
Sundsvall: Mid Sweden University, 2013. 49 p.
Series
Mid Sweden University licentiate thesis, ISSN 1652-8948 ; 101
Keyword
3DTV, view rendering, depth-image-based rendering, disocclusion filling, inpainting.
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:miun:diva-19194
Local ID: STC
ISBN: 9789187103773
Archive number: STC
OAI: STC
Presentation
2013-06-11, L111, Holmgatan 10, Sundsvall, 10:15 (English)
Opponent
Supervisors
Available from: 2013-06-13 Created: 2013-06-12 Last updated: 2016-10-20. Bibliographically approved

Open Access in DiVA

fulltext (494 kB), 313 downloads
File information
File name: FULLTEXT01.pdf
File size: 494 kB
Checksum (SHA-512): e6c31e987c19cb463c7113f8e79ebc451c1db0616e2d934142ce0a642848e935d67544da627f42c008fe622c3df1e0dfe91661d55ae4def4b5f7329ba2751fcb
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text | Scopus

Authority records BETA

Muddala, Suryanarayana Murthy; Sjöström, Mårten; Olsson, Roger; Tourancheau, Sylvain
